Feb 14 13:52:06 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 14 13:52:06 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 14 13:52:06 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 13:52:07 crc restorecon[4696]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 
13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 
crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 
13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 13:52:07 crc restorecon[4696]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 
crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc 
restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 13:52:07 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 14 13:52:08 crc kubenswrapper[4750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 13:52:08 crc kubenswrapper[4750]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 14 13:52:08 crc kubenswrapper[4750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 13:52:08 crc kubenswrapper[4750]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 13:52:08 crc kubenswrapper[4750]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 13:52:08 crc kubenswrapper[4750]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.455877 4750 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464743 4750 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464779 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464790 4750 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464799 4750 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464809 4750 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464818 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464830 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464840 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464849 4750 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 
13:52:08.464857 4750 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464866 4750 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464874 4750 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464882 4750 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464891 4750 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464900 4750 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464908 4750 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464916 4750 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464928 4750 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464938 4750 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464947 4750 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464957 4750 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464980 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464988 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.464997 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465005 4750 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465014 4750 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465024 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465034 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465056 4750 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465077 4750 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465090 4750 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465105 4750 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465155 4750 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465166 4750 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465176 4750 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465184 4750 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465192 4750 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465200 4750 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465209 4750 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465216 4750 feature_gate.go:330] unrecognized feature gate: Example Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465224 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465233 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465242 4750 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465249 4750 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465257 4750 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465268 4750 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465278 4750 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465287 4750 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465294 4750 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465302 4750 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465312 4750 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465320 4750 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465328 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465336 4750 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465346 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465357 4750 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465368 4750 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465378 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465386 4750 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465395 4750 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465403 4750 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465413 4750 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465421 4750 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465429 4750 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465437 4750 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465444 4750 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465452 4750 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465460 4750 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465467 4750 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465475 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.465482 4750 feature_gate.go:330] 
unrecognized feature gate: InsightsRuntimeExtractor Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468056 4750 flags.go:64] FLAG: --address="0.0.0.0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468105 4750 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468150 4750 flags.go:64] FLAG: --anonymous-auth="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468163 4750 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468175 4750 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468187 4750 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468201 4750 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468213 4750 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468222 4750 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468231 4750 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468241 4750 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468251 4750 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468261 4750 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468270 4750 flags.go:64] FLAG: --cgroup-root="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468279 4750 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468288 4750 flags.go:64] FLAG: 
--client-ca-file="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468297 4750 flags.go:64] FLAG: --cloud-config="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468305 4750 flags.go:64] FLAG: --cloud-provider="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468315 4750 flags.go:64] FLAG: --cluster-dns="[]" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468329 4750 flags.go:64] FLAG: --cluster-domain="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468339 4750 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468351 4750 flags.go:64] FLAG: --config-dir="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468360 4750 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468371 4750 flags.go:64] FLAG: --container-log-max-files="5" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468383 4750 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468392 4750 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468402 4750 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468411 4750 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468420 4750 flags.go:64] FLAG: --contention-profiling="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468429 4750 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468438 4750 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468448 4750 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468457 4750 flags.go:64] FLAG: 
--cpu-manager-policy-options="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468469 4750 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468478 4750 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468487 4750 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468496 4750 flags.go:64] FLAG: --enable-load-reader="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468507 4750 flags.go:64] FLAG: --enable-server="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468516 4750 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468528 4750 flags.go:64] FLAG: --event-burst="100" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468537 4750 flags.go:64] FLAG: --event-qps="50" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468547 4750 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468557 4750 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468566 4750 flags.go:64] FLAG: --eviction-hard="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468577 4750 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468586 4750 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468595 4750 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468605 4750 flags.go:64] FLAG: --eviction-soft="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468616 4750 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468625 4750 
flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468634 4750 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468644 4750 flags.go:64] FLAG: --experimental-mounter-path="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468653 4750 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468662 4750 flags.go:64] FLAG: --fail-swap-on="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468674 4750 flags.go:64] FLAG: --feature-gates="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468685 4750 flags.go:64] FLAG: --file-check-frequency="20s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468694 4750 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468704 4750 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468713 4750 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468723 4750 flags.go:64] FLAG: --healthz-port="10248" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468732 4750 flags.go:64] FLAG: --help="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468741 4750 flags.go:64] FLAG: --hostname-override="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468749 4750 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468759 4750 flags.go:64] FLAG: --http-check-frequency="20s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468768 4750 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468778 4750 flags.go:64] FLAG: --image-credential-provider-config="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468786 4750 
flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468797 4750 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468806 4750 flags.go:64] FLAG: --image-service-endpoint="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468815 4750 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468824 4750 flags.go:64] FLAG: --kube-api-burst="100" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468833 4750 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468842 4750 flags.go:64] FLAG: --kube-api-qps="50" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468853 4750 flags.go:64] FLAG: --kube-reserved="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468862 4750 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468871 4750 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468880 4750 flags.go:64] FLAG: --kubelet-cgroups="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468888 4750 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468897 4750 flags.go:64] FLAG: --lock-file="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468906 4750 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468915 4750 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468925 4750 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468939 4750 flags.go:64] FLAG: --log-json-split-stream="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468948 4750 
flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468958 4750 flags.go:64] FLAG: --log-text-split-stream="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468967 4750 flags.go:64] FLAG: --logging-format="text" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468978 4750 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468989 4750 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.468997 4750 flags.go:64] FLAG: --manifest-url="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469006 4750 flags.go:64] FLAG: --manifest-url-header="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469018 4750 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469028 4750 flags.go:64] FLAG: --max-open-files="1000000" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469038 4750 flags.go:64] FLAG: --max-pods="110" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469048 4750 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469057 4750 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469066 4750 flags.go:64] FLAG: --memory-manager-policy="None" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469075 4750 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469084 4750 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469092 4750 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469103 4750 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469150 4750 flags.go:64] FLAG: --node-status-max-images="50" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469159 4750 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469168 4750 flags.go:64] FLAG: --oom-score-adj="-999" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469177 4750 flags.go:64] FLAG: --pod-cidr="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469186 4750 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469202 4750 flags.go:64] FLAG: --pod-manifest-path="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469212 4750 flags.go:64] FLAG: --pod-max-pids="-1" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469221 4750 flags.go:64] FLAG: --pods-per-core="0" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469229 4750 flags.go:64] FLAG: --port="10250" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469240 4750 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469248 4750 flags.go:64] FLAG: --provider-id="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469257 4750 flags.go:64] FLAG: --qos-reserved="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469266 4750 flags.go:64] FLAG: --read-only-port="10255" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469276 4750 flags.go:64] FLAG: --register-node="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469285 4750 flags.go:64] FLAG: --register-schedulable="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469293 4750 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469497 4750 flags.go:64] FLAG: --registry-burst="10" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469508 4750 flags.go:64] FLAG: --registry-qps="5" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469519 4750 flags.go:64] FLAG: --reserved-cpus="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469531 4750 flags.go:64] FLAG: --reserved-memory="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469545 4750 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469556 4750 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469567 4750 flags.go:64] FLAG: --rotate-certificates="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469578 4750 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469589 4750 flags.go:64] FLAG: --runonce="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469601 4750 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469613 4750 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469625 4750 flags.go:64] FLAG: --seccomp-default="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469635 4750 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469644 4750 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469653 4750 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469662 4750 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469672 
4750 flags.go:64] FLAG: --storage-driver-password="root" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469680 4750 flags.go:64] FLAG: --storage-driver-secure="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469690 4750 flags.go:64] FLAG: --storage-driver-table="stats" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469699 4750 flags.go:64] FLAG: --storage-driver-user="root" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469707 4750 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469718 4750 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469727 4750 flags.go:64] FLAG: --system-cgroups="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469736 4750 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469750 4750 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469760 4750 flags.go:64] FLAG: --tls-cert-file="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469768 4750 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469781 4750 flags.go:64] FLAG: --tls-min-version="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469789 4750 flags.go:64] FLAG: --tls-private-key-file="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469801 4750 flags.go:64] FLAG: --topology-manager-policy="none" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469810 4750 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469819 4750 flags.go:64] FLAG: --topology-manager-scope="container" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469828 4750 flags.go:64] FLAG: --v="2" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469840 4750 
flags.go:64] FLAG: --version="false" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469851 4750 flags.go:64] FLAG: --vmodule="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469862 4750 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.469873 4750 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470103 4750 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470183 4750 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470193 4750 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470206 4750 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470218 4750 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470228 4750 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470237 4750 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470246 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470255 4750 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470263 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470273 4750 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470282 4750 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470291 4750 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470300 4750 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470309 4750 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470317 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470325 4750 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470332 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470340 4750 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470348 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470355 4750 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470363 4750 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470372 4750 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470380 4750 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470388 4750 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 13:52:08 crc 
kubenswrapper[4750]: W0214 13:52:08.470395 4750 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470409 4750 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470417 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470426 4750 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470434 4750 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470443 4750 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470451 4750 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470461 4750 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470472 4750 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470482 4750 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470492 4750 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470502 4750 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470512 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470523 4750 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470533 4750 feature_gate.go:330] unrecognized 
feature gate: ImageStreamImportMode Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470543 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470553 4750 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470565 4750 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470574 4750 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470582 4750 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470590 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470598 4750 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470605 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470613 4750 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470621 4750 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470633 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470642 4750 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470654 4750 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470666 4750 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470676 4750 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470687 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470696 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470706 4750 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470716 4750 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470726 4750 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470737 4750 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470751 4750 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470763 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470772 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470781 4750 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470789 4750 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470797 4750 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470805 4750 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470813 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470822 4750 feature_gate.go:330] unrecognized feature gate: Example Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.470831 4750 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.470864 4750 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.484097 4750 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.484350 4750 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484450 4750 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484462 4750 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484469 4750 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484474 4750 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484480 4750 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484487 4750 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484491 4750 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484496 4750 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484503 4750 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484513 4750 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484519 4750 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484525 4750 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484533 4750 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484540 4750 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484548 4750 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484556 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484563 4750 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484570 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484578 4750 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484585 4750 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484590 4750 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484595 4750 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484600 4750 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 
13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484605 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484610 4750 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484616 4750 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484621 4750 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484627 4750 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484632 4750 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484637 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484642 4750 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484647 4750 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484652 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484659 4750 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484667 4750 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484673 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484679 4750 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484684 4750 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484689 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484694 4750 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484699 4750 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484704 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484709 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484715 4750 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484720 4750 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484724 4750 feature_gate.go:330] unrecognized feature gate: Example Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484730 4750 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484736 4750 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484748 4750 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484758 4750 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484765 4750 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484772 4750 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484778 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484783 4750 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484788 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484792 4750 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484797 4750 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484802 4750 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484807 4750 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484812 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484816 4750 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484821 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484826 4750 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484833 4750 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484839 4750 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484844 4750 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484850 4750 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484856 4750 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484861 4750 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484867 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.484873 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.484882 4750 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485035 4750 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485045 4750 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 13:52:08 crc 
kubenswrapper[4750]: W0214 13:52:08.485050 4750 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485057 4750 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485062 4750 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485067 4750 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485072 4750 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485077 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485084 4750 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485092 4750 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485098 4750 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485104 4750 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485132 4750 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485138 4750 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485143 4750 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485148 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485153 
4750 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485158 4750 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485164 4750 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485169 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485174 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485181 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485187 4750 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485193 4750 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485199 4750 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485204 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485210 4750 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485216 4750 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485223 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485228 4750 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485234 4750 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485238 4750 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485245 4750 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485251 4750 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485257 4750 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485262 4750 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485267 4750 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485273 4750 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485278 4750 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485283 4750 feature_gate.go:330] unrecognized feature gate: Example Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485288 4750 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485295 4750 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485302 4750 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485307 4750 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485313 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485319 4750 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485326 4750 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485331 4750 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485337 4750 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485342 4750 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485347 4750 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485353 4750 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485358 4750 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485363 4750 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485368 4750 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485373 4750 feature_gate.go:330] unrecognized feature gate: PinnedImages 
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485378 4750 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485384 4750 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485388 4750 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485393 4750 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485398 4750 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485403 4750 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485408 4750 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485413 4750 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485418 4750 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485423 4750 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485428 4750 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485433 4750 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485439 4750 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485445 4750 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.485451 4750 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.485460 4750 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.486500 4750 server.go:940] "Client rotation is on, will bootstrap in background" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.491416 4750 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.491508 4750 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.492985 4750 server.go:997] "Starting client certificate rotation" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.493024 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.493226 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 04:00:39.934565926 +0000 UTC Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.493396 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.525336 4750 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.528404 4750 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.528690 4750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.543254 4750 log.go:25] "Validated CRI v1 runtime API" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.582994 4750 log.go:25] "Validated CRI v1 image API" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.585761 4750 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.592324 4750 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-14-13-47-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.592410 4750 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.626635 4750 manager.go:217] Machine: {Timestamp:2026-02-14 13:52:08.622906129 +0000 UTC m=+0.648895700 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bbcac0cb-82e6-48a0-97c6-f89f2f92ed82 BootID:c9eaedfc-b89c-47f4-85df-878c35f498b6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:97:37:ae Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:97:37:ae Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d9:3a:68 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:67:3c:36 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a3:2a:59 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:99:a6:d0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:5e:6e:13:ed:42 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:72:df:8a:be:ce:c5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.627066 4750 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.627323 4750 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.631554 4750 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.631948 4750 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.632015 4750 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":nu
ll,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.634429 4750 topology_manager.go:138] "Creating topology manager with none policy"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.634465 4750 container_manager_linux.go:303] "Creating device plugin manager"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.635328 4750 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.635373 4750 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.635642 4750 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.635786 4750 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.640644 4750 kubelet.go:418] "Attempting to sync node with API server"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.640684 4750 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.640712 4750 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.640734 4750 kubelet.go:324] "Adding apiserver pod source"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.640755 4750 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.645407 4750 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.647227 4750 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.647281 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.647374 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.648574 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.648691 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.651894 4750 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653670 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653731 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653750 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653764 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653787 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653818 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653832 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653854 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653869 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653883 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653914 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.653929 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.656207 4750 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.657081 4750 server.go:1280] "Started kubelet"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.660377 4750 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.661147 4750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.661643 4750 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 14 13:52:08 crc systemd[1]: Started Kubernetes Kubelet.
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.664326 4750 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.664391 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.664608 4750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.665632 4750 server.go:460] "Adding debug handlers to kubelet server"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.665815 4750 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.665850 4750 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.665830 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:55:03.081985506 +0000 UTC
Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.666101 4750 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.666426 4750 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.666495 4750 factory.go:55] Registering systemd factory
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.666520 4750 factory.go:221] Registration of the systemd container factory successfully
Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.675840 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.675981 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.675884 4750 factory.go:153] Registering CRI-O factory
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.676040 4750 factory.go:221] Registration of the crio container factory successfully
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.676193 4750 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.676240 4750 factory.go:103] Registering Raw factory
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.676270 4750 manager.go:1196] Started watching for new ooms in manager
Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.676547 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.677666 4750 manager.go:319] Starting recovery of all containers
Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.679891 4750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18942143d36bce00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 13:52:08.657030656 +0000 UTC m=+0.683020167,LastTimestamp:2026-02-14 13:52:08.657030656 +0000 UTC m=+0.683020167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.698880 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.699306 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.699628 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.699696 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.699729 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.699765 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.699988 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700196 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700277 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700347 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700381 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700426 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700479 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700528 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700558 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700592 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700640 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700686 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700740 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700784 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700830 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700892 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700927 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.700992 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.701024 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.701051 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.701091 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.701153 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703093 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703231 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703290 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703326 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703354 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703385 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703408 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703433 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703460 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703479 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703496 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703525 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703569 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703590 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703613 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703634 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703654 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703679 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703701 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703717 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703738 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703764 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703802 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703826 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703859 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703887 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703913 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703931 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703955 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703971 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.703991 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704008 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704042 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704073 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704090 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704173 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704210 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704233 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704250 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704269 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704290 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704307 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704332 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704347 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704363 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704385 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704406 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704426 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704445 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704464 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704484 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704501 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704521 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704537 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704553 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704574 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704590 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704609 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704624 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704639 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704657 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a"
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704673 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704694 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704710 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704726 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704764 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704780 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" 
seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704799 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704815 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704829 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704849 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704865 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704881 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704900 
4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.704918 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705053 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705177 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705235 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705305 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705365 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705403 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705437 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705482 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705535 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705579 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705605 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705662 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705687 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705724 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705749 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705795 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705818 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705847 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705880 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.705902 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706175 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706254 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706323 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706344 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706413 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706435 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706459 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.706487 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708281 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708353 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708379 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708412 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708435 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708456 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708478 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" 
seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708501 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.708526 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712029 4750 manager.go:324] Recovery completed Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712058 4750 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712107 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712157 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712179 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712204 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712228 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712250 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712269 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712292 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712393 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712418 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712443 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712464 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712486 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712506 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712527 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712548 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712570 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712590 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712615 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712636 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712657 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712706 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712728 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712752 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712772 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712793 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712813 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712832 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712854 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712875 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712897 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712918 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.712939 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713016 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713042 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713089 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713133 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713155 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713175 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" 
seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713197 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713220 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713242 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713264 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713285 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713309 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 14 13:52:08 crc 
kubenswrapper[4750]: I0214 13:52:08.713331 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713351 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713392 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713413 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713435 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713461 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713490 4750 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713511 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713535 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713559 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713587 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713607 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713629 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713650 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713674 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713701 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713730 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713784 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713807 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713832 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713853 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713871 4750 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713890 4750 reconstruct.go:97] "Volume reconstruction finished" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.713904 4750 reconciler.go:26] "Reconciler: start to sync state" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.729844 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.736676 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.737289 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.737315 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.738554 4750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.739599 4750 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.739648 4750 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.739675 4750 state_mem.go:36] "Initialized new in-memory state store" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.740464 4750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.740526 4750 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.740566 4750 kubelet.go:2335] "Starting kubelet main sync loop" Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.740626 4750 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 13:52:08 crc kubenswrapper[4750]: W0214 13:52:08.741433 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.741493 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.756790 4750 policy_none.go:49] "None 
policy: Start" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.758144 4750 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.758179 4750 state_mem.go:35] "Initializing new in-memory state store" Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.767038 4750 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.830276 4750 manager.go:334] "Starting Device Plugin manager" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.830360 4750 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.830378 4750 server.go:79] "Starting device plugin registration server" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.830956 4750 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.830972 4750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.831496 4750 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.831619 4750 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.831627 4750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.839979 4750 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.840811 4750 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.840916 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.842220 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.842283 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.842305 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.842548 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.842786 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.842839 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.843776 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.843818 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.843835 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.844391 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.844433 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.844450 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.844657 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.844792 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.844835 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845692 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845735 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845754 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845801 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845910 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845931 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.845971 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.846190 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.846235 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847037 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847082 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847082 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847155 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847168 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847105 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847410 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847501 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.847540 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.851700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.851752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.851766 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.853616 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.853646 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.853683 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.854039 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.854088 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.856145 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.856195 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.856217 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.877834 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916477 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916527 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916554 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916581 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916808 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916905 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916945 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.916986 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917034 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917153 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917203 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917224 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917250 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") 
pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917276 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.917323 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.931442 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.933984 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.934046 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.934067 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:08 crc kubenswrapper[4750]: I0214 13:52:08.934106 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:08 crc kubenswrapper[4750]: E0214 13:52:08.934925 4750 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection 
refused" node="crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.018938 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019009 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019039 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019082 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019235 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019293 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019296 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019307 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019367 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019360 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019451 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc 
kubenswrapper[4750]: I0214 13:52:09.019539 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019594 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019442 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019642 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019662 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019680 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019685 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019745 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019765 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019791 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019802 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019851 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019894 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019918 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019938 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.019976 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.020068 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.020157 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.135322 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.137055 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.137150 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.137169 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.137216 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:09 crc kubenswrapper[4750]: E0214 13:52:09.138071 4750 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": 
dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.172087 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.183982 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.221731 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: W0214 13:52:09.222551 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d081f13a486a06f8856bb0acc727bae729f79aa59412cad47ec04a55f125a3e4 WatchSource:0}: Error finding container d081f13a486a06f8856bb0acc727bae729f79aa59412cad47ec04a55f125a3e4: Status 404 returned error can't find the container with id d081f13a486a06f8856bb0acc727bae729f79aa59412cad47ec04a55f125a3e4 Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.234459 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.259603 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:09 crc kubenswrapper[4750]: W0214 13:52:09.273333 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-72717fa1201d3afcac38ba6c75dd818fa829baaf41a30134a3ea1cbd790d2afd WatchSource:0}: Error finding container 72717fa1201d3afcac38ba6c75dd818fa829baaf41a30134a3ea1cbd790d2afd: Status 404 returned error can't find the container with id 72717fa1201d3afcac38ba6c75dd818fa829baaf41a30134a3ea1cbd790d2afd Feb 14 13:52:09 crc kubenswrapper[4750]: E0214 13:52:09.278887 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Feb 14 13:52:09 crc kubenswrapper[4750]: W0214 13:52:09.287789 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fe13e73b8b3b3769ae00b974b9dcd38adbdf9b2f397c1fc454a407955fa0bc53 WatchSource:0}: Error finding container fe13e73b8b3b3769ae00b974b9dcd38adbdf9b2f397c1fc454a407955fa0bc53: Status 404 returned error can't find the container with id fe13e73b8b3b3769ae00b974b9dcd38adbdf9b2f397c1fc454a407955fa0bc53 Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.538244 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.540459 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.540508 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.540525 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.540561 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:09 crc kubenswrapper[4750]: E0214 13:52:09.541352 4750 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 14 13:52:09 crc kubenswrapper[4750]: W0214 13:52:09.543986 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:09 crc kubenswrapper[4750]: E0214 13:52:09.544102 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.663494 4750 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.666523 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:34:11.877688309 +0000 UTC Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.749624 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fe13e73b8b3b3769ae00b974b9dcd38adbdf9b2f397c1fc454a407955fa0bc53"} Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.750922 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72717fa1201d3afcac38ba6c75dd818fa829baaf41a30134a3ea1cbd790d2afd"} Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.751943 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aa9ca55fc229aa037fe5a641d3831490229da65eafe53c4461ad138f9530d7a2"} Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.753245 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d081f13a486a06f8856bb0acc727bae729f79aa59412cad47ec04a55f125a3e4"} Feb 14 13:52:09 crc kubenswrapper[4750]: I0214 13:52:09.754229 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e03a365337cc20cad6403cc4bd2c588b6215c1d2aa6b10b880aaba5115fe7a8b"} Feb 14 13:52:09 crc kubenswrapper[4750]: W0214 13:52:09.925127 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:09 crc kubenswrapper[4750]: E0214 13:52:09.925217 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed 
to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:09 crc kubenswrapper[4750]: W0214 13:52:09.933078 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:09 crc kubenswrapper[4750]: E0214 13:52:09.933202 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:10 crc kubenswrapper[4750]: E0214 13:52:10.079995 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Feb 14 13:52:10 crc kubenswrapper[4750]: W0214 13:52:10.196178 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:10 crc kubenswrapper[4750]: E0214 13:52:10.196762 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:10 
crc kubenswrapper[4750]: I0214 13:52:10.342042 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.344534 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.344628 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.344654 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.344706 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:10 crc kubenswrapper[4750]: E0214 13:52:10.345311 4750 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.663484 4750 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.667215 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:58:24.754108288 +0000 UTC Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.680435 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 13:52:10 crc kubenswrapper[4750]: E0214 13:52:10.682392 4750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the 
control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.760605 4750 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d" exitCode=0 Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.760706 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d"} Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.760862 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.762497 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.762550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.762572 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.764696 4750 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96" exitCode=0 Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.764740 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96"} Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.764944 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.766374 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.766419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.766437 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.768464 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.768537 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520"} Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.768608 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf"} Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.768628 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1"} Feb 14 
13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.769582 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.769629 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.769652 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.771577 4750 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a491e1fcb3c23bead440c60873969cbedcaeec7c557d385e756f57c284d205dc" exitCode=0 Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.771682 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a491e1fcb3c23bead440c60873969cbedcaeec7c557d385e756f57c284d205dc"} Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.771725 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.773254 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.773315 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.773337 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.775493 4750 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2" exitCode=0 Feb 14 
13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.775551 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2"} Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.775713 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.777322 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.777364 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:10 crc kubenswrapper[4750]: I0214 13:52:10.777382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.662600 4750 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.667881 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:19:27.081963251 +0000 UTC Feb 14 13:52:11 crc kubenswrapper[4750]: E0214 13:52:11.681885 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.779715 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="37dbc3b4d612bdd39c87415e2788cbf53a7c9a24a763fdb0e9c95e374568b476" exitCode=0 Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.779777 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"37dbc3b4d612bdd39c87415e2788cbf53a7c9a24a763fdb0e9c95e374568b476"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.779898 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.780674 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.780699 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.780709 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.784758 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c95a39773bc8329c56bc3ccefefd82ed52ecc4c69704e09d04af661e4d77e4e9"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.784832 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.785803 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.785828 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 
13:52:11.785837 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.793239 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.793265 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.793277 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.793340 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.794537 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.794567 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.794583 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.804957 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.805023 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.805038 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.810793 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466"} Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.810984 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.812522 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.812576 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.812589 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:11 crc kubenswrapper[4750]: W0214 13:52:11.932610 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:11 crc kubenswrapper[4750]: E0214 13:52:11.932724 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.946407 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.949371 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.949412 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.949423 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:11 crc kubenswrapper[4750]: I0214 13:52:11.949451 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:11 crc kubenswrapper[4750]: E0214 13:52:11.949896 4750 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 14 13:52:11 crc kubenswrapper[4750]: W0214 13:52:11.994191 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused 
Feb 14 13:52:11 crc kubenswrapper[4750]: E0214 13:52:11.994302 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:12 crc kubenswrapper[4750]: W0214 13:52:12.261614 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 14 13:52:12 crc kubenswrapper[4750]: E0214 13:52:12.261755 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.415450 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.668027 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:23:26.037410995 +0000 UTC Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.714454 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.726047 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:12 
crc kubenswrapper[4750]: I0214 13:52:12.820508 4750 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="40c598121670555dd4227360e2a758117ef6b8b1484cd0dcad95488fbc45705f" exitCode=0 Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.820628 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"40c598121670555dd4227360e2a758117ef6b8b1484cd0dcad95488fbc45705f"} Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.820673 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.821757 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.821865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.821884 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.823689 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.823721 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.823898 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824231 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.823665 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701"} Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824487 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c"} Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824513 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824711 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824798 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824868 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824875 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824972 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824985 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.824737 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.825019 4750 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.825029 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.825915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.825941 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:12 crc kubenswrapper[4750]: I0214 13:52:12.825953 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.668156 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:09:41.431097174 +0000 UTC Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.834830 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.834927 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.834959 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.834989 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.835019 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.835161 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2bd45e7632fd634b529de607367de8eff79dbd07cf1bc589583cb3f2b093759"} Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.835220 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a415108251af44b7d6dfc8a69118d26a352c2bdde65b059bdbcca7ac9ec7d233"} Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.835240 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da56be2cf02132c791c4bf70064b411b43ba495e1ebdb417a4d24207b1c91c77"} Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836501 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836551 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836563 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836674 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836717 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836736 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836683 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 
13:52:13.836794 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:13 crc kubenswrapper[4750]: I0214 13:52:13.836811 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.485402 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.668351 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:16:45.061823278 +0000 UTC Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.720641 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.848040 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.849249 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d95243d60e33df80e7d2e7de7e98c7b150a8e695115d2c5f75a491b3cb152fc1"} Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.849351 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"200753b8f1b454b5d2dc1219c9fba68d32ee54e7f4368e776fc8dec1e36e395c"} Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.849392 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.849436 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 
13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.849443 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.849853 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.850989 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851027 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851038 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851378 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851450 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851576 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851593 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:14 crc kubenswrapper[4750]: I0214 13:52:14.851939 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.150001 4750 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.152054 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.152267 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.152440 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.152642 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.669295 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:49:23.498709975 +0000 UTC Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.719706 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.852960 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.853488 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.854736 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.854770 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.854785 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:15 crc 
kubenswrapper[4750]: I0214 13:52:15.855768 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.855815 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:15 crc kubenswrapper[4750]: I0214 13:52:15.855827 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.615419 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.621596 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.670094 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:38:29.927164527 +0000 UTC Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.856153 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.856198 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.857865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.857874 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.857966 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 
13:52:16.857988 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.857934 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:16 crc kubenswrapper[4750]: I0214 13:52:16.858046 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:17 crc kubenswrapper[4750]: I0214 13:52:17.485836 4750 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 13:52:17 crc kubenswrapper[4750]: I0214 13:52:17.485946 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 13:52:17 crc kubenswrapper[4750]: I0214 13:52:17.670754 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:05:11.805112062 +0000 UTC Feb 14 13:52:18 crc kubenswrapper[4750]: I0214 13:52:18.671769 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:34:04.562464664 +0000 UTC Feb 14 13:52:18 crc kubenswrapper[4750]: E0214 13:52:18.840148 4750 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 13:52:18 crc 
kubenswrapper[4750]: I0214 13:52:18.847753 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:18 crc kubenswrapper[4750]: I0214 13:52:18.847949 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:18 crc kubenswrapper[4750]: I0214 13:52:18.849371 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:18 crc kubenswrapper[4750]: I0214 13:52:18.849618 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:18 crc kubenswrapper[4750]: I0214 13:52:18.849811 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:19 crc kubenswrapper[4750]: I0214 13:52:19.455285 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 14 13:52:19 crc kubenswrapper[4750]: I0214 13:52:19.455602 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:19 crc kubenswrapper[4750]: I0214 13:52:19.483046 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:19 crc kubenswrapper[4750]: I0214 13:52:19.483367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:19 crc kubenswrapper[4750]: I0214 13:52:19.483380 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:19 crc kubenswrapper[4750]: I0214 13:52:19.672941 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:49:07.221252261 +0000 UTC Feb 14 13:52:20 crc kubenswrapper[4750]: I0214 
13:52:20.673609 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:08:23.296042794 +0000 UTC Feb 14 13:52:21 crc kubenswrapper[4750]: I0214 13:52:21.673951 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:07:51.402842527 +0000 UTC Feb 14 13:52:22 crc kubenswrapper[4750]: I0214 13:52:22.664491 4750 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 14 13:52:22 crc kubenswrapper[4750]: I0214 13:52:22.674243 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:53:07.466906049 +0000 UTC Feb 14 13:52:23 crc kubenswrapper[4750]: W0214 13:52:23.339694 4750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 14 13:52:23 crc kubenswrapper[4750]: I0214 13:52:23.339891 4750 trace.go:236] Trace[992765973]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 13:52:13.338) (total time: 10001ms): Feb 14 13:52:23 crc kubenswrapper[4750]: Trace[992765973]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:52:23.339) Feb 14 13:52:23 crc kubenswrapper[4750]: Trace[992765973]: [10.001611001s] [10.001611001s] END Feb 14 13:52:23 crc kubenswrapper[4750]: E0214 13:52:23.339938 4750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 14 13:52:23 crc kubenswrapper[4750]: I0214 13:52:23.767173 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:22:29.678067133 +0000 UTC Feb 14 13:52:24 crc kubenswrapper[4750]: E0214 13:52:24.162081 4750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18942143d36bce00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 13:52:08.657030656 +0000 UTC m=+0.683020167,LastTimestamp:2026-02-14 13:52:08.657030656 +0000 UTC m=+0.683020167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 13:52:24 crc kubenswrapper[4750]: I0214 13:52:24.257863 4750 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 14 13:52:24 crc kubenswrapper[4750]: I0214 13:52:24.257940 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 13:52:24 crc kubenswrapper[4750]: I0214 13:52:24.265361 4750 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 14 13:52:24 crc kubenswrapper[4750]: I0214 13:52:24.265505 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 13:52:24 crc kubenswrapper[4750]: I0214 13:52:24.768040 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:27:34.96797829 +0000 UTC Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.727624 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.727856 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.729542 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.729603 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.729620 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:25 crc kubenswrapper[4750]: 
I0214 13:52:25.732418 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.768238 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:50:42.167506741 +0000 UTC Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.884623 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.884692 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.886052 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.886169 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:25 crc kubenswrapper[4750]: I0214 13:52:25.886191 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.696762 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.697046 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.698797 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.698887 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.698907 4750 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.717550 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.769314 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:55:01.735514845 +0000 UTC Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.886739 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.887865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.887931 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:26 crc kubenswrapper[4750]: I0214 13:52:26.887952 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:27 crc kubenswrapper[4750]: I0214 13:52:27.486230 4750 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 13:52:27 crc kubenswrapper[4750]: I0214 13:52:27.486361 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Feb 14 13:52:27 crc kubenswrapper[4750]: I0214 13:52:27.769872 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:58:22.03398038 +0000 UTC Feb 14 13:52:28 crc kubenswrapper[4750]: I0214 13:52:28.771178 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:31:05.194068297 +0000 UTC Feb 14 13:52:28 crc kubenswrapper[4750]: E0214 13:52:28.840410 4750 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 13:52:28 crc kubenswrapper[4750]: I0214 13:52:28.855363 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:28 crc kubenswrapper[4750]: I0214 13:52:28.855603 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:28 crc kubenswrapper[4750]: I0214 13:52:28.857428 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:28 crc kubenswrapper[4750]: I0214 13:52:28.857474 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:28 crc kubenswrapper[4750]: I0214 13:52:28.857487 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.262576 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.266044 4750 trace.go:236] 
Trace[288072300]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 13:52:18.465) (total time: 10800ms): Feb 14 13:52:29 crc kubenswrapper[4750]: Trace[288072300]: ---"Objects listed" error: 10800ms (13:52:29.265) Feb 14 13:52:29 crc kubenswrapper[4750]: Trace[288072300]: [10.800925387s] [10.800925387s] END Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.266089 4750 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.267623 4750 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.268593 4750 trace.go:236] Trace[1106003040]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 13:52:15.483) (total time: 13785ms): Feb 14 13:52:29 crc kubenswrapper[4750]: Trace[1106003040]: ---"Objects listed" error: 13785ms (13:52:29.268) Feb 14 13:52:29 crc kubenswrapper[4750]: Trace[1106003040]: [13.785248568s] [13.785248568s] END Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.268609 4750 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.268860 4750 trace.go:236] Trace[2012533557]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 13:52:16.739) (total time: 12529ms): Feb 14 13:52:29 crc kubenswrapper[4750]: Trace[2012533557]: ---"Objects listed" error: 12529ms (13:52:29.268) Feb 14 13:52:29 crc kubenswrapper[4750]: Trace[2012533557]: [12.529399144s] [12.529399144s] END Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.268891 4750 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 13:52:29 crc kubenswrapper[4750]: 
I0214 13:52:29.270400 4750 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.280288 4750 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.310517 4750 csr.go:261] certificate signing request csr-csjbb is approved, waiting to be issued Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.311281 4750 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49066->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.311344 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49066->192.168.126.11:17697: read: connection reset by peer" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.311486 4750 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49078->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.311704 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:49078->192.168.126.11:17697: read: connection reset by peer" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.311849 4750 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.311962 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.340533 4750 csr.go:257] certificate signing request csr-csjbb is issued Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.510014 4750 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.772291 4750 apiserver.go:52] "Watching apiserver" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.772576 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:16:11.150844175 +0000 UTC Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.776857 4750 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.777337 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.777807 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.777837 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.777924 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.777914 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.778004 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.778453 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.778991 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.779078 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.779173 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.780709 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.781383 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.781640 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.781834 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.781892 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.782050 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.782308 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.782643 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.782763 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.838926 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.857952 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.867887 4750 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.870887 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873218 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873280 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873318 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873356 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873391 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873427 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873463 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873498 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873540 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873579 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873612 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873651 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873651 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873690 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873729 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873733 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873765 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873850 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873871 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873884 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873913 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873934 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873951 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873974 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.873995 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874015 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874032 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874048 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874074 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874103 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874148 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874177 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874196 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874213 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874231 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874253 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874272 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874289 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874305 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874321 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874344 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874360 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874347 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874379 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874466 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874488 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874489 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874512 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874608 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874653 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874688 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874689 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874788 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874881 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874896 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875043 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875083 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875360 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875423 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875450 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875487 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875589 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875787 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875793 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875808 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.875995 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876016 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876019 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876064 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876180 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876188 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876254 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.874692 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876289 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876299 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876349 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876363 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876389 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876405 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876438 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876468 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876489 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876508 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876511 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876523 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876545 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876568 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876594 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876597 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876617 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876638 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876654 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876682 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876702 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876720 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876718 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876736 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876759 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876784 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876812 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876889 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876922 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876945 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876968 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876986 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877005 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877026 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877047 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877068 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877091 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877154 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877176 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877194 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877213 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877234 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877253 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877271 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877286 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877301 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877319 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877335 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877353 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877371 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877388 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877404 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877431 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877448 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877466 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877482 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877498 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877515 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877538 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 14 13:52:29 crc kubenswrapper[4750]:
I0214 13:52:29.877556 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877577 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877599 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877621 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877639 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877657 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") 
pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877675 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877728 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877747 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877765 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877784 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877805 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877822 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877838 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877856 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877872 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877892 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 
13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877908 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877926 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879606 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879645 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879669 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879688 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879708 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879726 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879744 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879766 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879800 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 
13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879821 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879843 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879862 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879885 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879904 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879925 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879944 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879965 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879987 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880004 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880022 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 13:52:29 crc 
kubenswrapper[4750]: I0214 13:52:29.880041 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880061 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880090 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880129 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880147 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880166 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880182 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880201 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880225 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880244 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880264 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880281 
4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880300 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880321 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880339 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880360 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880378 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880397 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880416 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880447 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880465 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880485 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880504 4750 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880526 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880650 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880669 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880694 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880718 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880742 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880765 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880790 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880833 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880852 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880870 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880894 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880912 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880929 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880948 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880966 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881075 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881101 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881142 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881164 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881186 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881209 
4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881234 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881260 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881279 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881299 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881319 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881338 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881356 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881375 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881397 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881418 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881444 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881472 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881532 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881557 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881581 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881608 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881642 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881668 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881691 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881718 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:29 
crc kubenswrapper[4750]: I0214 13:52:29.881743 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881970 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882005 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882830 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882856 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882876 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882971 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882984 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882997 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883008 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883018 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883029 4750 reconciler_common.go:293] 
"Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883040 4750 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883050 4750 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883061 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883071 4750 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883082 4750 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883092 4750 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883102 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883156 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883167 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883190 4750 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883200 4750 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883210 4750 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883221 4750 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883231 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883243 4750 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883252 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883262 4750 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883272 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883282 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883293 4750 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883304 4750 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node 
\"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883314 4750 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883324 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883334 4750 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883344 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883355 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883367 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876721 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" 
(OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876785 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876850 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876916 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.876933 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877050 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877066 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877099 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.878081 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.878658 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.878761 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.878993 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879028 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879242 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879616 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879739 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.879901 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880012 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.880493 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.877929 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881818 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.881883 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.884622 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882363 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882503 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.882523 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.883463 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.883990 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.884336 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.884403 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.884648 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.884826 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:30.384793399 +0000 UTC m=+22.410782890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.884953 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 13:52:30.384943394 +0000 UTC m=+22.410932885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.885481 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.885944 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.886375 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.889642 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.890151 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.890365 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.892370 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.894641 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.894775 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.894962 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.894992 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895022 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.895243 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895251 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.895332 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:30.395309414 +0000 UTC m=+22.421298895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895399 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895538 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895921 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895959 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.895972 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.896032 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.896049 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.896278 4750 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.900603 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.900896 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.902634 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.902924 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.903006 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.903396 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.903448 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.903680 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.903898 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904191 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904264 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904340 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904431 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904499 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904864 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904985 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.905057 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.904988 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.905221 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.905376 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.905521 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.905857 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.905933 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.906003 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.906164 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.906856 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.907784 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.908136 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.908388 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.908844 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.911966 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.914895 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.915449 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.915478 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.915493 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.915565 4750 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:30.415542011 +0000 UTC m=+22.441531492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.916398 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.916048 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.916507 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.916810 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.917269 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.917413 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.917558 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.917862 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918107 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918421 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918422 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918576 4750 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c" exitCode=255 Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918694 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c"} Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918703 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.918874 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.920339 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.920411 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.923270 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.923658 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.923824 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.924178 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.923971 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.924327 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.924633 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.924693 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.924978 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.925067 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.925369 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.925678 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.926714 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.927516 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.927874 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.930443 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.930815 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.930995 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.931219 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.932642 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.932818 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.932987 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.933232 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.936027 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.936632 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937079 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937151 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937415 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937497 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937741 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" 
(OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.937853 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.936983 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.938816 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.938896 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.939034 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.939108 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.939499 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.940072 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.940514 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.941080 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.942646 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.945907 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.946439 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.947235 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.947476 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.947530 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.947693 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.947799 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.948020 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.948134 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.948735 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.949059 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.949483 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.949966 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.949975 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.950248 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.952456 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.954012 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.955999 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.963870 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.966418 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.966608 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.966623 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.966916 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.970635 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.975710 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.975988 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.976426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.976895 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.985447 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986372 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986411 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986460 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 
13:52:29.986473 4750 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986482 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986490 4750 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986498 4750 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986506 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986517 4750 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986527 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986536 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986545 4750 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986553 4750 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986563 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986572 4750 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986581 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986590 4750 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986599 4750 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node 
\"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986607 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986616 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986625 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986633 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986643 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986651 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986660 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986669 4750 reconciler_common.go:293] 
"Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986678 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986687 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986696 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986706 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986714 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986723 4750 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986731 4750 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986742 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986751 4750 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986761 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986771 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986780 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986790 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986799 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") 
on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986809 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986818 4750 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986826 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986836 4750 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986844 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986852 4750 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986860 4750 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986869 4750 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986878 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986887 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986895 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986904 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986913 4750 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986922 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986931 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" 
(UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986940 4750 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986948 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986957 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986966 4750 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986975 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986983 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.986992 4750 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") 
on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987000 4750 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987009 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987019 4750 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987029 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987037 4750 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987045 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987053 4750 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987062 4750 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987070 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987079 4750 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987088 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987097 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.988297 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.987105 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989466 4750 reconciler_common.go:293] "Volume 
detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989478 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989488 4750 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989497 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989506 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989518 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989527 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989536 4750 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989545 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989554 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989563 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989572 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989581 4750 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989590 4750 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989600 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on 
node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989609 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989619 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989629 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989638 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989647 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989658 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989667 4750 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc 
kubenswrapper[4750]: I0214 13:52:29.989676 4750 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989685 4750 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989694 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989703 4750 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989712 4750 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989722 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989731 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989740 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989749 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989758 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989769 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989778 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989788 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989797 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989808 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989816 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989825 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989837 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989846 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989854 4750 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989863 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989872 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" 
DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989881 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989889 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989897 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989905 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989913 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989922 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989930 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989938 4750 reconciler_common.go:293] "Volume detached for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989947 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989956 4750 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989964 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989972 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989980 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989990 4750 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.989998 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990008 4750 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990017 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990026 4750 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990034 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990042 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990050 4750 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990059 4750 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node 
\"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990066 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990082 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990090 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990098 4750 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990106 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990133 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990142 4750 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990151 4750 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990160 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990169 4750 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990177 4750 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990185 4750 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990194 4750 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990202 4750 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990215 4750 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990227 4750 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990238 4750 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.990384 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.992206 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: I0214 13:52:29.994134 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.997351 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.997383 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.997401 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:29 crc kubenswrapper[4750]: E0214 13:52:29.997468 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:30.497442367 +0000 UTC m=+22.523431848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.005250 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.012712 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.015567 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.017043 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.029300 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.042682 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.043066 4750 scope.go:117] "RemoveContainer" containerID="f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.047723 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.063254 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.083988 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.091542 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.091575 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.091585 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.091594 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.091603 4750 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.094956 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.106840 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.110576 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.119706 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.121824 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: W0214 13:52:30.122905 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c5d2668fa4802391e56ffe8bcb4fc7d8dcf2ff2f0d8ac97e25c0cadb1827f819 WatchSource:0}: Error finding container c5d2668fa4802391e56ffe8bcb4fc7d8dcf2ff2f0d8ac97e25c0cadb1827f819: Status 404 returned error can't find the container with id c5d2668fa4802391e56ffe8bcb4fc7d8dcf2ff2f0d8ac97e25c0cadb1827f819 Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.123695 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.134899 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.147975 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: W0214 13:52:30.156949 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-67dc952430f95f08ad883724710356b9d941fed32328ac3d84bff28fc30d1721 WatchSource:0}: Error finding container 67dc952430f95f08ad883724710356b9d941fed32328ac3d84bff28fc30d1721: Status 404 returned error can't find the container with id 67dc952430f95f08ad883724710356b9d941fed32328ac3d84bff28fc30d1721 Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.161963 4750 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.343962 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-14 13:47:29 +0000 UTC, rotation deadline is 2027-01-03 17:52:40.192623416 +0000 UTC Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.344459 4750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7756h0m9.84816779s for next certificate rotation Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.395517 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.395606 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.395635 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.395756 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:52:31.395720173 +0000 UTC m=+23.421709644 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.395828 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.395862 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.395896 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:31.395875137 +0000 UTC m=+23.421864618 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.395929 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:31.395917498 +0000 UTC m=+23.421907199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.497134 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.497290 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.497307 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:30 crc 
kubenswrapper[4750]: E0214 13:52:30.497320 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.497368 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:31.497353802 +0000 UTC m=+23.523343283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.551291 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j5rld"] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.551592 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-78xwc"] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.551757 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.552311 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.555854 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.556454 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.556597 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.556702 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.556931 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.557087 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.557525 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.558177 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.583734 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.592606 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.597874 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/581740c6-1f28-4471-8131-5d5042cc59f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.597925 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxln\" (UniqueName: 
\"kubernetes.io/projected/581740c6-1f28-4471-8131-5d5042cc59f5-kube-api-access-czxln\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.597951 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3a555a0c-f608-450a-b6aa-28dedd5b5e34-hosts-file\") pod \"node-resolver-78xwc\" (UID: \"3a555a0c-f608-450a-b6aa-28dedd5b5e34\") " pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.597978 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2w9\" (UniqueName: \"kubernetes.io/projected/3a555a0c-f608-450a-b6aa-28dedd5b5e34-kube-api-access-fk2w9\") pod \"node-resolver-78xwc\" (UID: \"3a555a0c-f608-450a-b6aa-28dedd5b5e34\") " pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.598074 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/581740c6-1f28-4471-8131-5d5042cc59f5-proxy-tls\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.598163 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.598219 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/581740c6-1f28-4471-8131-5d5042cc59f5-rootfs\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.598420 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.598463 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.598476 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:30 crc kubenswrapper[4750]: E0214 13:52:30.598546 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:31.59852791 +0000 UTC m=+23.624517391 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.601920 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.613296 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.624482 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.634375 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.644090 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.658264 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.672369 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.685417 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.697910 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699269 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/581740c6-1f28-4471-8131-5d5042cc59f5-proxy-tls\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699347 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/581740c6-1f28-4471-8131-5d5042cc59f5-rootfs\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699376 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/581740c6-1f28-4471-8131-5d5042cc59f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699433 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3a555a0c-f608-450a-b6aa-28dedd5b5e34-hosts-file\") pod \"node-resolver-78xwc\" (UID: \"3a555a0c-f608-450a-b6aa-28dedd5b5e34\") " pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699493 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/581740c6-1f28-4471-8131-5d5042cc59f5-rootfs\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699621 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3a555a0c-f608-450a-b6aa-28dedd5b5e34-hosts-file\") pod \"node-resolver-78xwc\" (UID: \"3a555a0c-f608-450a-b6aa-28dedd5b5e34\") " pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.700370 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/581740c6-1f28-4471-8131-5d5042cc59f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.699463 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czxln\" (UniqueName: 
\"kubernetes.io/projected/581740c6-1f28-4471-8131-5d5042cc59f5-kube-api-access-czxln\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.700516 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk2w9\" (UniqueName: \"kubernetes.io/projected/3a555a0c-f608-450a-b6aa-28dedd5b5e34-kube-api-access-fk2w9\") pod \"node-resolver-78xwc\" (UID: \"3a555a0c-f608-450a-b6aa-28dedd5b5e34\") " pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.713273 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.725817 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.736504 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.746193 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.746918 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.748037 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.748648 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.749631 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.750123 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.750749 4750 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.751850 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.752540 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.753663 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.754371 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.758406 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.759059 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.759601 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.760672 4750 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.761286 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.762017 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/581740c6-1f28-4471-8131-5d5042cc59f5-proxy-tls\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.762039 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czxln\" (UniqueName: \"kubernetes.io/projected/581740c6-1f28-4471-8131-5d5042cc59f5-kube-api-access-czxln\") pod \"machine-config-daemon-j5rld\" (UID: \"581740c6-1f28-4471-8131-5d5042cc59f5\") " pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.762216 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk2w9\" (UniqueName: \"kubernetes.io/projected/3a555a0c-f608-450a-b6aa-28dedd5b5e34-kube-api-access-fk2w9\") pod \"node-resolver-78xwc\" (UID: \"3a555a0c-f608-450a-b6aa-28dedd5b5e34\") " pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.762319 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.762749 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.764321 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.765428 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.766047 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.767835 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.768393 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.770486 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.771185 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.772011 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.772630 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.773342 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:44:21.072388549 +0000 UTC Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.773732 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.774225 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.774787 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.775689 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.776220 4750 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.776371 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.777017 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.779047 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.779613 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.780001 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.782147 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.782816 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.783708 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.784578 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.785599 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.786038 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.786642 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.787609 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.788582 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.789042 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.790010 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.790511 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.791648 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.792128 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.792917 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.793547 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.794061 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.794899 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.795538 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.796456 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.805856 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.868212 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-78xwc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.875347 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:52:30 crc kubenswrapper[4750]: W0214 13:52:30.884139 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a555a0c_f608_450a_b6aa_28dedd5b5e34.slice/crio-36e1d05cdf9a3dc42eaf6c886cb857d197c81fe21f216b1397dd2cc9f5e2f018 WatchSource:0}: Error finding container 36e1d05cdf9a3dc42eaf6c886cb857d197c81fe21f216b1397dd2cc9f5e2f018: Status 404 returned error can't find the container with id 36e1d05cdf9a3dc42eaf6c886cb857d197c81fe21f216b1397dd2cc9f5e2f018 Feb 14 13:52:30 crc kubenswrapper[4750]: W0214 13:52:30.894225 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod581740c6_1f28_4471_8131_5d5042cc59f5.slice/crio-8a933bbb184d5e231210b0a104dde9d45bfd2db9f87082b73ae0e02394c92504 WatchSource:0}: Error finding container 8a933bbb184d5e231210b0a104dde9d45bfd2db9f87082b73ae0e02394c92504: Status 404 returned error can't find the container with id 8a933bbb184d5e231210b0a104dde9d45bfd2db9f87082b73ae0e02394c92504 Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.925394 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.925848 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jd2lx"] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.926502 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-n59sl"] Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.926720 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-n59sl" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.927160 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.929641 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.929805 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.930007 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.931061 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.931227 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.931560 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.932024 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.936019 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.936840 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.938194 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-78xwc" event={"ID":"3a555a0c-f608-450a-b6aa-28dedd5b5e34","Type":"ContainerStarted","Data":"36e1d05cdf9a3dc42eaf6c886cb857d197c81fe21f216b1397dd2cc9f5e2f018"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.939696 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"67dc952430f95f08ad883724710356b9d941fed32328ac3d84bff28fc30d1721"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.940614 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"8a933bbb184d5e231210b0a104dde9d45bfd2db9f87082b73ae0e02394c92504"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.944167 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:30Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.945402 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.945450 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.945463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"12889c262aa16c018439280c2e55bad34f653f9d7fb12c31d2b4c8f3fe1e3a03"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.949840 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.949878 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c5d2668fa4802391e56ffe8bcb4fc7d8dcf2ff2f0d8ac97e25c0cadb1827f819"} Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.960492 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:30Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.977395 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:30Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:30 crc kubenswrapper[4750]: I0214 13:52:30.991678 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:30Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003594 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003671 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-socket-dir-parent\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003704 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-system-cni-dir\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003726 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7475461f-e0e5-4d5e-91fd-bfe8fb575146-cni-binary-copy\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003755 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-hostroot\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003772 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-multus-certs\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003790 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-k8s-cni-cncf-io\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003807 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-netns\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003910 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-daemon-config\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.003965 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2617686e-5f7f-40a4-9654-fee29bbd1d71-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004173 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-system-cni-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004287 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-cni-multus\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004369 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-cni-bin\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004431 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-etc-kubernetes\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004461 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-os-release\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004510 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-cni-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004531 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-cnibin\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004556 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8pp9\" (UniqueName: 
\"kubernetes.io/projected/7475461f-e0e5-4d5e-91fd-bfe8fb575146-kube-api-access-z8pp9\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004627 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2617686e-5f7f-40a4-9654-fee29bbd1d71-cni-binary-copy\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004673 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-os-release\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004844 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-conf-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004873 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdr75\" (UniqueName: \"kubernetes.io/projected/2617686e-5f7f-40a4-9654-fee29bbd1d71-kube-api-access-bdr75\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004916 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-cnibin\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.004974 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-kubelet\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.005873 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.019015 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.043480 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.060041 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.076779 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.093772 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105620 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-cnibin\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc 
kubenswrapper[4750]: I0214 13:52:31.105657 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8pp9\" (UniqueName: \"kubernetes.io/projected/7475461f-e0e5-4d5e-91fd-bfe8fb575146-kube-api-access-z8pp9\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105685 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-cni-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105707 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-os-release\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105728 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2617686e-5f7f-40a4-9654-fee29bbd1d71-cni-binary-copy\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105752 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-conf-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105779 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bdr75\" (UniqueName: \"kubernetes.io/projected/2617686e-5f7f-40a4-9654-fee29bbd1d71-kube-api-access-bdr75\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105801 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-kubelet\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105825 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-cnibin\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105834 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-cni-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105879 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105908 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-socket-dir-parent\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105947 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-system-cni-dir\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105969 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7475461f-e0e5-4d5e-91fd-bfe8fb575146-cni-binary-copy\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106135 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-socket-dir-parent\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106174 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-system-cni-dir\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106034 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-kubelet\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106079 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-os-release\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106136 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-cnibin\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106171 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-hostroot\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106200 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-hostroot\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106329 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-multus-certs\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " 
pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106362 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-multus-certs\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106376 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-netns\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.105983 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-conf-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106449 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-netns\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106470 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-k8s-cni-cncf-io\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106504 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2617686e-5f7f-40a4-9654-fee29bbd1d71-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106537 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-daemon-config\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106565 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-system-cni-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106592 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-cni-multus\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106613 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106624 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-cni-bin\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106663 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-cni-bin\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106685 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-run-k8s-cni-cncf-io\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106731 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-host-var-lib-cni-multus\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106714 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-system-cni-dir\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106665 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-etc-kubernetes\") pod \"multus-n59sl\" (UID: 
\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106803 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-os-release\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106702 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-etc-kubernetes\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106908 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7475461f-e0e5-4d5e-91fd-bfe8fb575146-os-release\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.106983 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2617686e-5f7f-40a4-9654-fee29bbd1d71-cnibin\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.107307 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7475461f-e0e5-4d5e-91fd-bfe8fb575146-cni-binary-copy\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.107390 4750 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7475461f-e0e5-4d5e-91fd-bfe8fb575146-multus-daemon-config\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.107608 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2617686e-5f7f-40a4-9654-fee29bbd1d71-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.108017 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2617686e-5f7f-40a4-9654-fee29bbd1d71-cni-binary-copy\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.109236 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.124097 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8pp9\" (UniqueName: \"kubernetes.io/projected/7475461f-e0e5-4d5e-91fd-bfe8fb575146-kube-api-access-z8pp9\") pod \"multus-n59sl\" (UID: \"7475461f-e0e5-4d5e-91fd-bfe8fb575146\") " pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc 
kubenswrapper[4750]: I0214 13:52:31.124734 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdr75\" (UniqueName: \"kubernetes.io/projected/2617686e-5f7f-40a4-9654-fee29bbd1d71-kube-api-access-bdr75\") pod \"multus-additional-cni-plugins-jd2lx\" (UID: \"2617686e-5f7f-40a4-9654-fee29bbd1d71\") " pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.130194 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.142900 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.149439 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-n59sl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.160426 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc
50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: W0214 13:52:31.161153 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7475461f_e0e5_4d5e_91fd_bfe8fb575146.slice/crio-142751851d40018eed28ca52dd0df9984f2af860fd6c850a21eb872d89e0c048 WatchSource:0}: Error finding container 142751851d40018eed28ca52dd0df9984f2af860fd6c850a21eb872d89e0c048: Status 404 returned error can't find the container with id 142751851d40018eed28ca52dd0df9984f2af860fd6c850a21eb872d89e0c048 Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.172916 4750 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.179062 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: W0214 13:52:31.194345 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2617686e_5f7f_40a4_9654_fee29bbd1d71.slice/crio-c75e35eb7f78540dd4c82eb5358d90ee4e927bc8171a482b63c7b4a19da93451 WatchSource:0}: Error finding container c75e35eb7f78540dd4c82eb5358d90ee4e927bc8171a482b63c7b4a19da93451: Status 404 returned error can't find the container with id c75e35eb7f78540dd4c82eb5358d90ee4e927bc8171a482b63c7b4a19da93451 Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.204780 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.237314 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.297178 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.320809 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4ct5"] Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.322180 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.326731 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.326774 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.327078 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.334883 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.335083 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.335220 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.335325 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.335435 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.371495 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.389527 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413066 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413174 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-netns\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413201 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-var-lib-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413222 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovn-node-metrics-cert\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413238 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-netd\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413253 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-log-socket\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413268 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-kubelet\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413287 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-slash\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413302 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-ovn\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413329 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413287 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.413435 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:52:33.413416217 +0000 UTC m=+25.439405698 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413346 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-node-log\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413554 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-script-lib\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.413574 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413584 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-systemd-units\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413603 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.413610 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:33.413603572 +0000 UTC m=+25.439593053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413622 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rqm\" (UniqueName: \"kubernetes.io/projected/06beb41c-7a86-45c1-85c2-c4f9543961ea-kube-api-access-97rqm\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413669 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-bin\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413706 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-systemd\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413720 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-env-overrides\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413741 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413756 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-etc-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413773 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 
13:52:31.413799 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-config\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.413824 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.413914 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.413939 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:33.413932741 +0000 UTC m=+25.439922222 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.429645 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.445078 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.459866 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.476520 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.490979 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.504411 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.514835 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-ovn\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.514897 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-node-log\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.514925 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-systemd-units\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.514953 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.514988 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-script-lib\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515015 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97rqm\" (UniqueName: \"kubernetes.io/projected/06beb41c-7a86-45c1-85c2-c4f9543961ea-kube-api-access-97rqm\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515001 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-ovn\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515245 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-systemd-units\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515038 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-bin\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515331 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-bin\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515327 
4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515436 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-systemd\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515465 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-systemd\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515476 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-env-overrides\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515537 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-etc-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515563 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515585 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-etc-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515634 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-config\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515647 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515672 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515711 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515748 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-netns\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515783 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-var-lib-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515791 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515820 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovn-node-metrics-cert\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515848 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-netns\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515849 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-netd\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515889 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-netd\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515903 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-kubelet\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.516536 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-slash\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.516565 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-log-socket\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515999 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-node-log\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.515992 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.516691 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.516689 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-env-overrides\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.516734 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-slash\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515949 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-var-lib-openvswitch\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.516707 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.516810 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-config\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.516769 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-log-socket\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.515927 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-kubelet\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.516851 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-14 13:52:33.516826565 +0000 UTC m=+25.542816246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.517090 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-script-lib\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.524164 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.537713 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.554906 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.563276 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovn-node-metrics-cert\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.563559 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97rqm\" (UniqueName: \"kubernetes.io/projected/06beb41c-7a86-45c1-85c2-c4f9543961ea-kube-api-access-97rqm\") pod \"ovnkube-node-n4ct5\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.586023 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.602918 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.617898 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.618191 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.618477 4750 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.618515 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.618533 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.618607 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:33.618581718 +0000 UTC m=+25.644571199 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.639162 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:31 crc kubenswrapper[4750]: W0214 13:52:31.653486 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06beb41c_7a86_45c1_85c2_c4f9543961ea.slice/crio-d5c18f894bba462ec6f9069917f8c96e3a8d3aa4c33293322449fdf84886c9ba WatchSource:0}: Error finding container d5c18f894bba462ec6f9069917f8c96e3a8d3aa4c33293322449fdf84886c9ba: Status 404 returned error can't find the container with id d5c18f894bba462ec6f9069917f8c96e3a8d3aa4c33293322449fdf84886c9ba Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.741431 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.741557 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.741624 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.741677 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.741892 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:31 crc kubenswrapper[4750]: E0214 13:52:31.742054 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.774433 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:31:40.13424252 +0000 UTC Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.960403 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-78xwc" event={"ID":"3a555a0c-f608-450a-b6aa-28dedd5b5e34","Type":"ContainerStarted","Data":"6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.970045 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691" exitCode=0 Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.970163 4750 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.970288 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"d5c18f894bba462ec6f9069917f8c96e3a8d3aa4c33293322449fdf84886c9ba"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.971991 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerStarted","Data":"08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.972031 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerStarted","Data":"142751851d40018eed28ca52dd0df9984f2af860fd6c850a21eb872d89e0c048"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.974842 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.974870 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.977286 4750 generic.go:334] "Generic (PLEG): container finished" podID="2617686e-5f7f-40a4-9654-fee29bbd1d71" 
containerID="4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43" exitCode=0 Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.978244 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerDied","Data":"4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.978288 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerStarted","Data":"c75e35eb7f78540dd4c82eb5358d90ee4e927bc8171a482b63c7b4a19da93451"} Feb 14 13:52:31 crc kubenswrapper[4750]: I0214 13:52:31.987295 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:31Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.022479 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.039658 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.055542 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.070986 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.088241 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.102484 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.114507 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.128770 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.141616 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.157490 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.184151 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.204223 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.219595 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.281449 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.298935 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.309821 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.335574 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.370288 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.390206 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.416701 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.432015 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.443341 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.455973 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.468844 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-78wgq"] Feb 14 13:52:32 crc 
kubenswrapper[4750]: I0214 13:52:32.469320 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.471770 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.471997 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.472041 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.472213 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.494044 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.523378 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.565189 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.585944 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqrj6\" (UniqueName: \"kubernetes.io/projected/b9d2bd01-539c-4980-8ff6-46efd6a51f43-kube-api-access-nqrj6\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " 
pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.586021 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9d2bd01-539c-4980-8ff6-46efd6a51f43-host\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.586046 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9d2bd01-539c-4980-8ff6-46efd6a51f43-serviceca\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.602709 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.643972 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.683529 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.687040 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9d2bd01-539c-4980-8ff6-46efd6a51f43-serviceca\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.687137 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqrj6\" (UniqueName: \"kubernetes.io/projected/b9d2bd01-539c-4980-8ff6-46efd6a51f43-kube-api-access-nqrj6\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.687206 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9d2bd01-539c-4980-8ff6-46efd6a51f43-host\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.687277 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9d2bd01-539c-4980-8ff6-46efd6a51f43-host\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.688663 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9d2bd01-539c-4980-8ff6-46efd6a51f43-serviceca\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.730830 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqrj6\" (UniqueName: \"kubernetes.io/projected/b9d2bd01-539c-4980-8ff6-46efd6a51f43-kube-api-access-nqrj6\") pod \"node-ca-78wgq\" (UID: \"b9d2bd01-539c-4980-8ff6-46efd6a51f43\") " pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.744866 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.775413 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:39:33.141879649 +0000 UTC Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.795309 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.826948 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.865299 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.911611 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.945265 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.985061 4750 generic.go:334] "Generic (PLEG): container finished" podID="2617686e-5f7f-40a4-9654-fee29bbd1d71" containerID="487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee" exitCode=0 Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.985145 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" 
event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerDied","Data":"487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee"} Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.986745 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:32Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.990329 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3"} Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.990524 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317"} Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.990569 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa"} Feb 14 13:52:32 crc kubenswrapper[4750]: I0214 13:52:32.990591 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88"} Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.029676 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.036328 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-78wgq" Feb 14 13:52:33 crc kubenswrapper[4750]: W0214 13:52:33.059586 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d2bd01_539c_4980_8ff6_46efd6a51f43.slice/crio-4b16570660d7f385cdb7ab79231e3985dbbe175d79f377f134a16c31c9d7fbc3 WatchSource:0}: Error finding container 4b16570660d7f385cdb7ab79231e3985dbbe175d79f377f134a16c31c9d7fbc3: Status 404 returned error can't find the container with id 4b16570660d7f385cdb7ab79231e3985dbbe175d79f377f134a16c31c9d7fbc3 Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.069139 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.111888 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.145507 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.188694 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.230066 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.266262 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.307694 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.343286 4750 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.383909 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.424569 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.463813 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.494918 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.495092 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.495177 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.495289 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.495314 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.495362 4750 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:37.495344323 +0000 UTC m=+29.521333804 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.495437 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:37.495403014 +0000 UTC m=+29.521392725 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.495538 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:52:37.495523397 +0000 UTC m=+29.521513108 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.503395 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:33Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.596148 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.596375 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.596407 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.596425 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.596505 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:37.596481619 +0000 UTC m=+29.622471100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.697346 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.697534 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.697558 4750 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.697574 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.697644 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:37.697622315 +0000 UTC m=+29.723611796 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.741312 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.741464 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.741503 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.741578 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.741584 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:33 crc kubenswrapper[4750]: E0214 13:52:33.741744 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:33 crc kubenswrapper[4750]: I0214 13:52:33.776621 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:00:29.416939179 +0000 UTC Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.000040 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90"} Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.000144 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d"} Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.001726 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-78wgq" event={"ID":"b9d2bd01-539c-4980-8ff6-46efd6a51f43","Type":"ContainerStarted","Data":"d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af"} Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.001772 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-78wgq" event={"ID":"b9d2bd01-539c-4980-8ff6-46efd6a51f43","Type":"ContainerStarted","Data":"4b16570660d7f385cdb7ab79231e3985dbbe175d79f377f134a16c31c9d7fbc3"} Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.003998 4750 generic.go:334] "Generic (PLEG): container finished" podID="2617686e-5f7f-40a4-9654-fee29bbd1d71" containerID="dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e" exitCode=0 Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.004043 4750 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerDied","Data":"dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e"} Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.006394 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9"} Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.017277 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.039578 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.065728 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.082510 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.098039 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.114021 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.128217 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.142615 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.160930 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.181068 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.197573 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.212471 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.223373 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.236234 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.247613 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.262752 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.278870 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.294753 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.308169 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.323691 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffb
e0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.346393 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.383311 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.423007 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.464739 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.490791 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.494260 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.506200 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.566588 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.581102 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.607561 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.644378 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.684489 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.731230 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.767598 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.777244 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:38:15.442287692 +0000 UTC Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.809846 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.847427 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.885989 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 
13:52:34.925889 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:34 crc kubenswrapper[4750]: I0214 13:52:34.965378 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.004768 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.013652 4750 generic.go:334] "Generic (PLEG): container finished" podID="2617686e-5f7f-40a4-9654-fee29bbd1d71" 
containerID="4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334" exitCode=0 Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.013735 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerDied","Data":"4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334"} Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.047192 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.085963 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.126534 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.163517 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.205567 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.249330 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.286138 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.326674 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.369081 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.404835 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.445922 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.485612 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.526656 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.565939 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.602899 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.646275 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.667824 4750 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.670141 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.670191 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.670202 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.670366 4750 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.684670 4750 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\
"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.716203 4750 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.716573 4750 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.717825 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.717880 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.717894 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.717915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.717930 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.736554 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.740717 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.740818 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.740855 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.740909 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.741031 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.741253 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.741282 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.741292 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.741306 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.741316 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.741367 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.758168 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.764129 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.764190 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.764206 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.764229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.764244 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.783263 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:36:47.053101289 +0000 UTC Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.800592 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",
\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.805789 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.805847 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.805869 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.805903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.805921 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.825897 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.830682 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.830725 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.830734 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.830751 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.830764 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.856408 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:35 crc kubenswrapper[4750]: E0214 13:52:35.856541 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.858448 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.858499 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.858512 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.858537 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.858550 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.961795 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.961868 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.961886 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.961917 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:35 crc kubenswrapper[4750]: I0214 13:52:35.961943 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:35Z","lastTransitionTime":"2026-02-14T13:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.020632 4750 generic.go:334] "Generic (PLEG): container finished" podID="2617686e-5f7f-40a4-9654-fee29bbd1d71" containerID="ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866" exitCode=0 Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.020707 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerDied","Data":"ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.027473 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.041737 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.061992 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.064476 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.064531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.064643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.064724 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.064746 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.091352 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.111563 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.125923 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.139499 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.156390 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.167726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.167764 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.167775 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.167793 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.167806 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.169783 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.182438 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.197227 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.211238 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.227075 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.242942 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.270291 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.270373 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.270394 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.270422 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.270437 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.282504 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.373515 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.373582 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.373604 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.373633 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.373653 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.477318 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.477411 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.477434 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.477472 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.477498 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.593007 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.593078 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.593091 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.593133 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.593162 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.695903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.695968 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.695990 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.696015 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.696032 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.783446 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:39:42.584797304 +0000 UTC Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.799251 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.799293 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.799303 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.799320 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.799335 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.901905 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.901950 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.901961 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.901978 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:36 crc kubenswrapper[4750]: I0214 13:52:36.901989 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:36Z","lastTransitionTime":"2026-02-14T13:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.004749 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.004818 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.004833 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.004854 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.004868 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.035477 4750 generic.go:334] "Generic (PLEG): container finished" podID="2617686e-5f7f-40a4-9654-fee29bbd1d71" containerID="a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0" exitCode=0 Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.035546 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerDied","Data":"a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.061546 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.078945 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.100288 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.109547 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc 
kubenswrapper[4750]: I0214 13:52:37.109686 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.109754 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.109820 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.109881 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.119499 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.136227 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.153892 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.170384 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.191240 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.205767 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.221941 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.221994 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.222008 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.222028 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.222040 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.239597 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.255981 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.278483 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.293290 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.311648 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.324917 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.325142 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.325225 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.325312 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.325377 4750 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.428228 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.428537 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.428621 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.428701 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.428773 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.530700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.530841 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.530904 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.530982 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.531050 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.534734 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.534884 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 13:52:45.534850891 +0000 UTC m=+37.560840372 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.534946 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.535075 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.535259 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.535295 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 13:52:45.535288543 +0000 UTC m=+37.561278024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.535535 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.535697 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:45.535672394 +0000 UTC m=+37.561662035 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.633705 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.633790 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.633811 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.633848 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.633871 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.636290 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.636464 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.636491 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.636506 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.636610 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:45.636555633 +0000 UTC m=+37.662545114 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.737600 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.737712 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.737862 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.737938 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.738007 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.738032 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.737950 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.738139 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.738145 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:45.73809306 +0000 UTC m=+37.764082711 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.738157 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.741447 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.741624 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.741838 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.742319 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.742587 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:37 crc kubenswrapper[4750]: E0214 13:52:37.743243 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.784472 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:23:08.095933898 +0000 UTC Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.840843 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.840873 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.840882 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.840898 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.840910 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.943752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.943802 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.943818 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.943843 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:37 crc kubenswrapper[4750]: I0214 13:52:37.943861 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:37Z","lastTransitionTime":"2026-02-14T13:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.045521 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.045570 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.045581 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.045609 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.045629 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.047528 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.048042 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.048092 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.052722 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" event={"ID":"2617686e-5f7f-40a4-9654-fee29bbd1d71","Type":"ContainerStarted","Data":"458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.067308 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.088142 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.100153 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.101107 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.110644 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.126693 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.140650 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.148986 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.149022 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.149031 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc 
kubenswrapper[4750]: I0214 13:52:38.149051 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.149063 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.157095 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.181526 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.195922 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.215745 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.231966 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.244929 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.256710 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.256787 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.256817 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.256851 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.256879 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.263764 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.276817 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.285963 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.299646 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.312860 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.327103 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.341589 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.352885 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.358969 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.359031 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.359047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc 
kubenswrapper[4750]: I0214 13:52:38.359070 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.359088 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.372530 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.390522 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.407145 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.420009 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.432730 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.442299 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.451414 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.461793 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.461849 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.461860 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.461878 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.461891 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.467040 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.484424 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.493365 4750 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.564187 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.564269 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.564295 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.564328 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.564354 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.666748 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.666805 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.666821 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.666844 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.666860 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.763377 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z 
is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.769488 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.769547 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.769559 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.769576 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.769607 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.778735 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e16982
51c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.784745 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:21:25.460899176 +0000 UTC Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.802712 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.817727 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.835597 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.859033 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.871350 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.871596 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.871681 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.871794 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.871878 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.875398 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.897461 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.914459 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.948380 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe5979
6703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.962805 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017
b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.975145 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.975208 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.975223 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:38 crc 
kubenswrapper[4750]: I0214 13:52:38.975249 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.975267 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:38Z","lastTransitionTime":"2026-02-14T13:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:38 crc kubenswrapper[4750]: I0214 13:52:38.981921 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1
33151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.000935 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.018180 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.057446 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.077962 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.078079 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.078189 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.078288 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.078383 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.181474 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.181533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.181590 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.181619 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.181640 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.284865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.284941 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.284958 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.284982 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.284996 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.388416 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.388471 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.388484 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.388503 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.388518 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.491590 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.491646 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.491658 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.491684 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.491698 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.601585 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.601710 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.602837 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.602916 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.602941 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.705808 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.705850 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.705862 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.705880 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.705892 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.741167 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.741292 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:39 crc kubenswrapper[4750]: E0214 13:52:39.741596 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:39 crc kubenswrapper[4750]: E0214 13:52:39.751523 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.752209 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:39 crc kubenswrapper[4750]: E0214 13:52:39.752348 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.785541 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:23:17.904844306 +0000 UTC Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.812975 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.813025 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.813041 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.813068 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.813082 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.916283 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.916544 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.916565 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.916591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:39 crc kubenswrapper[4750]: I0214 13:52:39.916608 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:39Z","lastTransitionTime":"2026-02-14T13:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.019880 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.020207 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.020313 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.020429 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.020498 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.060399 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.122605 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.122937 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.123016 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.123107 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.123218 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.225079 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.225107 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.225170 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.225186 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.225195 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.327446 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.327504 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.327516 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.327538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.327555 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.430903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.431231 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.431362 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.431454 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.431529 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.534036 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.534074 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.534088 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.534105 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.534139 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.636764 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.637054 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.637155 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.637249 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.637566 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.740673 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.740722 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.740742 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.740762 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.740779 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.787355 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:42:44.075771538 +0000 UTC Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.844272 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.844329 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.844342 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.844365 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.844380 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.947365 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.947449 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.947470 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.947498 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:40 crc kubenswrapper[4750]: I0214 13:52:40.947517 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:40Z","lastTransitionTime":"2026-02-14T13:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.050751 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.050822 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.050838 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.050865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.050883 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.067977 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/0.log" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.073373 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c" exitCode=1 Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.073458 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.075429 4750 scope.go:117] "RemoveContainer" containerID="cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.095863 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.122324 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.155093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.155149 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.155163 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.155181 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.155195 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.157265 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP 
event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 
13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a68
2691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.178394 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.196362 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.213779 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.230797 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.247795 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.258042 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.258079 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.258088 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc 
kubenswrapper[4750]: I0214 13:52:41.258103 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.258126 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.260984 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.279582 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.298457 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.312831 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.326646 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.338833 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:41Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.360664 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.360690 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.360701 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.360717 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.360731 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.465339 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.465384 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.465396 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.465413 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.465423 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.567824 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.567863 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.567874 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.567889 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.567899 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.717829 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.717869 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.717882 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.717898 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.717910 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.741089 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:41 crc kubenswrapper[4750]: E0214 13:52:41.741257 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.741702 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:41 crc kubenswrapper[4750]: E0214 13:52:41.741765 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.741813 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:41 crc kubenswrapper[4750]: E0214 13:52:41.741861 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.788531 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:23:33.127805487 +0000 UTC Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.820409 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.820439 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.820447 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.820460 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.820468 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.922459 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.922488 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.922497 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.922511 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:41 crc kubenswrapper[4750]: I0214 13:52:41.922520 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:41Z","lastTransitionTime":"2026-02-14T13:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.024705 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.024984 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.025053 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.025129 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.025187 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.080619 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/0.log" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.084605 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.084786 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.101071 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.117296 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.129082 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.129634 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.129660 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.129691 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.129713 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.134666 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.149779 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.164855 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.182214 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.202190 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.233028 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.233105 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.233165 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.233195 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.233214 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.235558 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.271100 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.287919 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.302955 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.316793 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.335558 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.336912 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.337004 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.337061 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.337136 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.337221 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.358895 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 
13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.440356 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.440394 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.440406 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.440426 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.440438 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.544555 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.544624 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.544650 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.544709 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.544737 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.653025 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.653575 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.653761 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.653927 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.654091 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.685041 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6"] Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.685785 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.688668 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.691907 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.713590 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.727301 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/011928a7-1832-44dc-acf7-7b54adbd2108-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.727369 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/011928a7-1832-44dc-acf7-7b54adbd2108-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.727408 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wcrdz\" (UniqueName: \"kubernetes.io/projected/011928a7-1832-44dc-acf7-7b54adbd2108-kube-api-access-wcrdz\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.727624 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/011928a7-1832-44dc-acf7-7b54adbd2108-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.733827 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.753458 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.757037 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.757196 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.757305 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.757366 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.757429 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.766860 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.785026 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.788715 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:18:07.57077865 +0000 UTC Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.804780 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.820891 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.828975 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/011928a7-1832-44dc-acf7-7b54adbd2108-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.829057 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/011928a7-1832-44dc-acf7-7b54adbd2108-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.829085 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/011928a7-1832-44dc-acf7-7b54adbd2108-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.829106 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrdz\" (UniqueName: \"kubernetes.io/projected/011928a7-1832-44dc-acf7-7b54adbd2108-kube-api-access-wcrdz\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.830213 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/011928a7-1832-44dc-acf7-7b54adbd2108-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.830876 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/011928a7-1832-44dc-acf7-7b54adbd2108-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.837084 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.839816 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/011928a7-1832-44dc-acf7-7b54adbd2108-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.854831 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.858615 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrdz\" (UniqueName: \"kubernetes.io/projected/011928a7-1832-44dc-acf7-7b54adbd2108-kube-api-access-wcrdz\") pod \"ovnkube-control-plane-749d76644c-bf5d6\" (UID: \"011928a7-1832-44dc-acf7-7b54adbd2108\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.865709 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.865758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.865770 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.865788 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.865804 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.869085 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.890352 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.904084 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.921561 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.946559 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 
13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.959528 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:42Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.969435 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.969490 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.969508 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.969534 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:42 crc kubenswrapper[4750]: I0214 13:52:42.969551 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:42Z","lastTransitionTime":"2026-02-14T13:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.013180 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" Feb 14 13:52:43 crc kubenswrapper[4750]: W0214 13:52:43.033605 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod011928a7_1832_44dc_acf7_7b54adbd2108.slice/crio-467e0d7e280f632076a1b5d09bc7658add9600858251961891f30b15cca7ce4c WatchSource:0}: Error finding container 467e0d7e280f632076a1b5d09bc7658add9600858251961891f30b15cca7ce4c: Status 404 returned error can't find the container with id 467e0d7e280f632076a1b5d09bc7658add9600858251961891f30b15cca7ce4c Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.074914 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.074974 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.074997 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.075020 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.075037 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.096075 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" event={"ID":"011928a7-1832-44dc-acf7-7b54adbd2108","Type":"ContainerStarted","Data":"467e0d7e280f632076a1b5d09bc7658add9600858251961891f30b15cca7ce4c"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.098666 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/1.log" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.099206 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/0.log" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.102424 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22" exitCode=1 Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.102478 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.102555 4750 scope.go:117] "RemoveContainer" containerID="cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.103425 4750 scope.go:117] "RemoveContainer" containerID="f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22" Feb 14 13:52:43 crc kubenswrapper[4750]: E0214 13:52:43.103616 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.122268 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.136826 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.154643 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.167743 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.178196 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.178263 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.178282 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.178311 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.178329 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.190381 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.207749 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.225506 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.240635 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.253075 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.274055 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.280003 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.280032 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.280044 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.280064 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.280077 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.288044 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.310367 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682
691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.325266 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.337424 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.353079 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:43Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.382670 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.382717 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.382730 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.382750 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.382763 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.485406 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.485454 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.485473 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.485498 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.485516 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.589369 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.589486 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.589506 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.589531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.589553 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.694203 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.694287 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.694317 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.694346 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.694366 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.741825 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.741825 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:43 crc kubenswrapper[4750]: E0214 13:52:43.742033 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.741858 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:43 crc kubenswrapper[4750]: E0214 13:52:43.742133 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:43 crc kubenswrapper[4750]: E0214 13:52:43.742198 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.789440 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:46:17.131447868 +0000 UTC Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.797736 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.797796 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.797819 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.797852 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.797871 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.900645 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.900693 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.900706 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.900721 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:43 crc kubenswrapper[4750]: I0214 13:52:43.900732 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:43Z","lastTransitionTime":"2026-02-14T13:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.003806 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.003854 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.003865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.003882 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.003893 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.106029 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.106093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.106141 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.106165 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.106180 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.109059 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" event={"ID":"011928a7-1832-44dc-acf7-7b54adbd2108","Type":"ContainerStarted","Data":"40a774e487de02a3df49ab57a1351bc81d2f68455a04f3628e3c9561cd1a16e9"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.109168 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" event={"ID":"011928a7-1832-44dc-acf7-7b54adbd2108","Type":"ContainerStarted","Data":"ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.113283 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/1.log" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.135386 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.162082 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.181438 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.204563 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.208935 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc 
kubenswrapper[4750]: I0214 13:52:44.208983 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.209001 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.209027 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.209043 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.223071 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.247348 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.262065 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.277043 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.292099 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.307458 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.311623 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.311693 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.311727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.311751 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.311764 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.330905 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.355423 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682
691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.368312 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.381456 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.394721 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.415017 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.415074 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.415093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc 
kubenswrapper[4750]: I0214 13:52:44.415138 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.415156 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.517625 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.517710 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.517738 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.517772 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.517793 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.621873 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.621986 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.622019 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.622054 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.622079 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.730333 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.732194 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.732252 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.732284 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.732303 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.790342 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:12:30.175967664 +0000 UTC Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.835079 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.835195 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.835221 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.835252 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.835275 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.855381 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.872320 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.888867 4750 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.903338 4750 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.922439 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.938755 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.938821 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.938843 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.938872 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.938900 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:44Z","lastTransitionTime":"2026-02-14T13:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.939239 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.960472 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.982699 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:44Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.997892 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-l6hd4"] Feb 14 13:52:44 crc kubenswrapper[4750]: I0214 13:52:44.998902 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:44 crc kubenswrapper[4750]: E0214 13:52:44.999062 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.003545 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.021787 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2
ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.076269 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2zg\" (UniqueName: \"kubernetes.io/projected/29305ecd-7a38-4ed0-b02e-b391e5487699-kube-api-access-6z2zg\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.076365 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.077079 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.077179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.077206 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.077239 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.077264 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.089899 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.124131 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682
691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.141302 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.155860 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.177702 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2zg\" (UniqueName: \"kubernetes.io/projected/29305ecd-7a38-4ed0-b02e-b391e5487699-kube-api-access-6z2zg\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.177844 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.178054 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.178288 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:45.678256783 +0000 UTC m=+37.704246304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.179936 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.180646 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.180706 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.180725 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.180752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.180782 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.203228 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.216372 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2zg\" (UniqueName: \"kubernetes.io/projected/29305ecd-7a38-4ed0-b02e-b391e5487699-kube-api-access-6z2zg\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.222665 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.239845 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T1
3:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.261301 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.283408 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.283464 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.283484 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.283512 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.283530 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.283704 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.301015 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.314133 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.338473 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.368141 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc57a71953b1980720dfe100f451a0aa728450b0d4278b50a87eddcfddf31f7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:40Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 13:52:40.820294 6048 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:52:40.820342 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:52:40.820378 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:52:40.820388 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:52:40.820409 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:52:40.820425 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:52:40.820446 6048 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 13:52:40.820511 6048 factory.go:656] Stopping watch factory\\\\nI0214 13:52:40.820539 6048 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:52:40.820578 6048 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:52:40.820619 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:52:40.820631 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:52:40.820644 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:52:40.820657 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:52:40.820669 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:52:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682
691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.386259 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.386315 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.386327 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.386346 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.386358 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.387192 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f68455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 
13:52:45.408973 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.423854 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.443331 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc 
kubenswrapper[4750]: I0214 13:52:45.456938 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.471283 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.483406 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.488572 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.488625 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.488634 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc 
kubenswrapper[4750]: I0214 13:52:45.488649 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.488677 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.496905 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.582873 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.583011 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.583130 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:53:01.583072005 +0000 UTC m=+53.609061486 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.583150 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.583205 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.583242 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:01.58322464 +0000 UTC m=+53.609214111 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.583386 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.583443 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:01.583435235 +0000 UTC m=+53.609424716 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.590874 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.590916 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.590926 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.590944 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.590958 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.683958 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.684019 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.684152 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.684177 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.684189 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.684235 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:01.684221192 +0000 UTC m=+53.710210673 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.684146 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.684276 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:46.684265753 +0000 UTC m=+38.710255234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.693100 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.693165 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.693184 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.693204 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.693215 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.741345 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.741479 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.741524 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.741345 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.741686 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.742101 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.785037 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.785326 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.785382 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.785405 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:45 crc kubenswrapper[4750]: E0214 13:52:45.785490 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:01.785464051 +0000 UTC m=+53.811453572 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.790516 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:15:24.866264597 +0000 UTC Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.796472 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.796530 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.796549 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.796575 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.796597 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.899496 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.899557 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.899575 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.899599 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.899617 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.975692 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.975746 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.975765 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.975795 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:45 crc kubenswrapper[4750]: I0214 13:52:45.975820 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:45Z","lastTransitionTime":"2026-02-14T13:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.000954 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:45Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.007010 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.007072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.007086 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.007133 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.007150 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.026958 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:46Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.032596 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.032741 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.032769 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.032795 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.032817 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.054231 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:46Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.060256 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.060355 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.060382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.060463 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.060597 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.081811 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:46Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.087483 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.087554 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.087573 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.087602 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.087623 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.110478 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:46Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.110746 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.114179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.114233 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.114252 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.114278 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.114294 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.217271 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.217357 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.217382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.217414 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.217437 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.320746 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.320826 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.320849 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.320883 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.320909 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.424816 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.424899 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.424934 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.424966 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.424989 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.528516 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.528584 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.528601 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.528705 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.528764 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.632465 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.632538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.632558 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.632591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.632611 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.696875 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.697274 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.697393 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:48.697360783 +0000 UTC m=+40.723350294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.735545 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.735587 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.735599 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.735617 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.735632 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.741345 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:46 crc kubenswrapper[4750]: E0214 13:52:46.741494 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.790841 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:50:03.010950575 +0000 UTC Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.838975 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.839042 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.839066 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.839098 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.839182 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.942919 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.942971 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.942985 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.943003 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:46 crc kubenswrapper[4750]: I0214 13:52:46.943017 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:46Z","lastTransitionTime":"2026-02-14T13:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.046020 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.046078 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.046094 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.046150 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.046170 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.149185 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.149251 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.149296 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.149323 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.149344 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.172746 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.174485 4750 scope.go:117] "RemoveContainer" containerID="f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22" Feb 14 13:52:47 crc kubenswrapper[4750]: E0214 13:52:47.174833 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.194272 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.217474 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.239694 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.253806 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.253872 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.253889 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc 
kubenswrapper[4750]: I0214 13:52:47.253918 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.253937 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.264199 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.287351 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.309470 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.328432 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.345906 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.358027 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.358101 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.358147 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.358177 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.358196 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.363921 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.380803 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.410055 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T1
3:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.429076 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.454461 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.461388 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.461476 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.461557 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.461913 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.461961 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.471333 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f68455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.487224 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.509852 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:47Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.565407 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.565495 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.565909 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.565998 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.566301 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.670059 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.670219 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.670249 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.670283 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.670304 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.741668 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.741746 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.741667 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:47 crc kubenswrapper[4750]: E0214 13:52:47.741915 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:47 crc kubenswrapper[4750]: E0214 13:52:47.742022 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:47 crc kubenswrapper[4750]: E0214 13:52:47.742188 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.774615 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.774679 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.774698 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.774721 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.774738 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.791778 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:45:51.922156402 +0000 UTC Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.878241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.878317 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.878340 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.878370 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.878390 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.982188 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.982263 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.982282 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.982312 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:47 crc kubenswrapper[4750]: I0214 13:52:47.982333 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:47Z","lastTransitionTime":"2026-02-14T13:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.085562 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.085648 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.085674 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.085707 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.085729 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.189562 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.189638 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.189663 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.189696 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.189721 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.293086 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.293208 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.293228 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.293255 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.293276 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.397146 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.397226 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.397245 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.397273 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.397292 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.500857 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.500948 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.500979 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.501013 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.501039 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.605383 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.605448 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.605470 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.605499 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.605518 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.710536 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.710604 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.710622 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.710649 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.710670 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.718466 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:48 crc kubenswrapper[4750]: E0214 13:52:48.718664 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:48 crc kubenswrapper[4750]: E0214 13:52:48.718739 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:52:52.71871548 +0000 UTC m=+44.744705001 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.742023 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:48 crc kubenswrapper[4750]: E0214 13:52:48.742357 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.767583 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.788996 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptable
s-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.792203 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:39:29.276665395 +0000 UTC Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.811525 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.814191 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.814248 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.814265 4750 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.814293 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.814316 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.830508 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc
697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.855379 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2
ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.873908 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.899924 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.917203 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.917303 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.917324 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.917352 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.917373 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:48Z","lastTransitionTime":"2026-02-14T13:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.937901 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.958208 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.977479 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:48 crc kubenswrapper[4750]: I0214 13:52:48.997528 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:48Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.018012 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:49Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:49 crc 
kubenswrapper[4750]: I0214 13:52:49.020596 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.020754 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.020779 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.020847 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.020914 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.041337 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:49Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.060056 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:49Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.080672 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:49Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.103888 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:49Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.124462 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.124532 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.124550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.124578 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.124603 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.229466 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.229536 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.229556 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.229589 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.229792 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.347354 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.347415 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.347435 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.347462 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.347481 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.450382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.450434 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.450449 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.450470 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.450485 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.553204 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.553268 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.553304 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.553340 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.553362 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.657300 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.657364 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.657384 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.657416 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.657440 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.741514 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.741607 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:49 crc kubenswrapper[4750]: E0214 13:52:49.741745 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:49 crc kubenswrapper[4750]: E0214 13:52:49.741947 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.742205 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:49 crc kubenswrapper[4750]: E0214 13:52:49.742519 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.760419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.760507 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.760529 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.760554 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.760574 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.792739 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:02:10.263355717 +0000 UTC Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.864000 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.864579 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.864760 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.864916 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.865043 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.968713 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.968774 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.968794 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.968820 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:49 crc kubenswrapper[4750]: I0214 13:52:49.968837 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:49Z","lastTransitionTime":"2026-02-14T13:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.072882 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.072967 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.072993 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.073024 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.073052 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.176755 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.176827 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.176853 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.176886 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.176908 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.280370 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.280431 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.280448 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.280472 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.280491 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.384638 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.384712 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.384731 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.384755 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.384773 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.488960 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.489232 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.489257 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.489288 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.489309 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.592576 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.592661 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.592689 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.592720 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.592744 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.696694 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.696792 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.696816 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.696848 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.696875 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.740903 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:50 crc kubenswrapper[4750]: E0214 13:52:50.741161 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.793716 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:19:17.578263976 +0000 UTC Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.800523 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.800591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.800609 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.800644 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.800673 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.904023 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.904081 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.904093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.904133 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:50 crc kubenswrapper[4750]: I0214 13:52:50.904146 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:50Z","lastTransitionTime":"2026-02-14T13:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.007654 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.007710 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.007727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.007753 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.007767 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.110731 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.110826 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.110844 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.110864 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.110884 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.213765 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.213846 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.213858 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.213880 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.213892 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.316434 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.316498 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.316517 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.316545 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.316561 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.419595 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.419664 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.419675 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.419697 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.419710 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.523259 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.523310 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.523319 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.523341 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.523352 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.626155 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.626220 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.626452 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.626488 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.626540 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.729986 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.730034 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.730045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.730065 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.730078 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.741439 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:51 crc kubenswrapper[4750]: E0214 13:52:51.741649 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.741727 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:51 crc kubenswrapper[4750]: E0214 13:52:51.741852 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.741917 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:51 crc kubenswrapper[4750]: E0214 13:52:51.741990 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.794644 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:21:35.738181373 +0000 UTC Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.833194 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.833281 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.833294 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.833317 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.833332 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.945285 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.945357 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.945378 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.945409 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:51 crc kubenswrapper[4750]: I0214 13:52:51.945428 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:51Z","lastTransitionTime":"2026-02-14T13:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.048606 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.048683 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.048701 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.048725 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.048741 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.151728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.152179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.152401 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.152642 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.152816 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.256558 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.256627 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.256648 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.256675 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.256692 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.360808 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.360865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.360884 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.360912 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.360929 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.464184 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.464268 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.464287 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.464315 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.464335 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.567087 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.567179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.567199 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.567226 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.567245 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.670437 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.670509 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.670533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.670566 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.670590 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.741184 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:52 crc kubenswrapper[4750]: E0214 13:52:52.741402 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.767960 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:52 crc kubenswrapper[4750]: E0214 13:52:52.768249 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:52 crc kubenswrapper[4750]: E0214 13:52:52.768663 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:00.76863338 +0000 UTC m=+52.794622901 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.774696 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.774807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.774827 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.774852 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.774872 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.795309 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:09:25.749139648 +0000 UTC Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.877720 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.877782 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.877798 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.877858 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.877878 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.981928 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.982000 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.982018 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.982048 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:52 crc kubenswrapper[4750]: I0214 13:52:52.982068 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:52Z","lastTransitionTime":"2026-02-14T13:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.085447 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.085516 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.085537 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.085564 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.085583 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.188983 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.189046 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.189061 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.189080 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.189093 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.292040 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.292097 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.292129 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.292155 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.292172 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.395064 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.395135 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.395148 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.395169 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.395181 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.498780 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.498821 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.498830 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.498847 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.498859 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.602971 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.603047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.603083 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.603144 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.603171 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.706781 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.706827 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.706840 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.706861 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.706872 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.741796 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.741892 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.741959 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:53 crc kubenswrapper[4750]: E0214 13:52:53.742023 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:53 crc kubenswrapper[4750]: E0214 13:52:53.742206 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:53 crc kubenswrapper[4750]: E0214 13:52:53.742366 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.796486 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:37:21.855745613 +0000 UTC Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.810175 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.810238 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.810295 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.810325 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.810343 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.913630 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.913728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.913753 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.913833 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:53 crc kubenswrapper[4750]: I0214 13:52:53.913852 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:53Z","lastTransitionTime":"2026-02-14T13:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.017571 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.017623 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.017639 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.017667 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.017687 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.121430 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.121523 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.121542 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.121572 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.121591 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.225200 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.225272 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.225289 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.225319 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.225342 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.329263 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.329339 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.329362 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.329403 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.329428 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.432179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.432230 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.432241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.432260 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.432270 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.536047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.536172 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.536198 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.536229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.536253 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.640377 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.640443 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.640462 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.640489 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.640509 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.741054 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:54 crc kubenswrapper[4750]: E0214 13:52:54.741366 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.743812 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.743883 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.743907 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.744160 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.744194 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.797515 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:28:45.530481952 +0000 UTC Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.847447 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.847522 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.847540 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.847568 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.847592 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.951273 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.951356 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.951373 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.951402 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:54 crc kubenswrapper[4750]: I0214 13:52:54.951420 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:54Z","lastTransitionTime":"2026-02-14T13:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.055066 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.055153 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.055168 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.055194 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.055211 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.159603 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.159685 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.159707 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.159735 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.159753 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.263667 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.263753 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.263781 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.263816 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.263842 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.369419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.369504 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.369529 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.369562 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.369590 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.473296 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.473361 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.473377 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.473398 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.473409 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.594951 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.595024 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.595037 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.595058 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.595070 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.698484 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.698549 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.698566 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.698591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.698612 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.740879 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.740972 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.741013 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:55 crc kubenswrapper[4750]: E0214 13:52:55.741152 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:55 crc kubenswrapper[4750]: E0214 13:52:55.741271 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:55 crc kubenswrapper[4750]: E0214 13:52:55.741439 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.798482 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:55:17.251699634 +0000 UTC Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.801432 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.801502 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.801528 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.801559 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.801585 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.904637 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.904711 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.904728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.904753 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:55 crc kubenswrapper[4750]: I0214 13:52:55.904771 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:55Z","lastTransitionTime":"2026-02-14T13:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.008608 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.008680 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.008707 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.008731 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.008749 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.111898 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.111956 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.111976 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.112000 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.112018 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.216183 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.216253 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.216283 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.216312 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.216331 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.319072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.319208 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.319239 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.319270 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.319293 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.414881 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.414942 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.414961 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.414987 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.415011 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.436834 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:56Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.442810 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.442920 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.442944 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.442965 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.442980 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.465251 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:56Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.471291 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.471348 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.471361 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.471379 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.471392 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.487605 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:56Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.493375 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.493431 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.493450 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.493474 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.493490 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.515800 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:56Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.522602 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.522700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.522726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.522758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.522794 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.549549 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:56Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.549807 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.552241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.552287 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.552304 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.552328 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.552347 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.656495 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.656557 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.656570 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.656592 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.656607 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.741648 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:56 crc kubenswrapper[4750]: E0214 13:52:56.741875 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.759982 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.760067 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.760090 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.760689 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.761080 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.799329 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:14:15.409019936 +0000 UTC Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.864447 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.864508 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.864525 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.864550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.864568 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.967317 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.967378 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.967402 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.967506 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:56 crc kubenswrapper[4750]: I0214 13:52:56.967531 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:56Z","lastTransitionTime":"2026-02-14T13:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.071700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.071780 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.071794 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.071817 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.071830 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.174912 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.174998 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.175023 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.175052 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.175074 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.277939 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.278001 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.278017 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.278041 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.278057 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.380727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.380808 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.380834 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.380864 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.380891 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.484984 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.485050 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.485072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.485099 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.485148 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.588731 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.588786 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.588803 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.588830 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.588844 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.691474 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.691587 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.691618 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.691640 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.691655 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.741526 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.741567 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.741636 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:57 crc kubenswrapper[4750]: E0214 13:52:57.741829 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:57 crc kubenswrapper[4750]: E0214 13:52:57.741977 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:57 crc kubenswrapper[4750]: E0214 13:52:57.742190 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.794830 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.794870 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.794883 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.794899 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.794910 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.800174 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:49:52.915861623 +0000 UTC Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.897020 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.897062 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.897089 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.897104 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:57 crc kubenswrapper[4750]: I0214 13:52:57.897147 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:57Z","lastTransitionTime":"2026-02-14T13:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.000601 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.000673 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.000684 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.000702 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.000712 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.103357 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.103403 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.103415 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.103434 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.103447 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.206584 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.207104 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.207290 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.207446 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.207567 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.310543 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.310595 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.310605 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.310623 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.310644 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.413386 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.413419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.413429 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.413442 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.413454 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.517257 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.517322 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.517338 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.517360 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.517377 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.620718 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.620799 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.620824 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.620859 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.620884 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.723758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.723800 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.723820 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.723837 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.723846 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.741487 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:52:58 crc kubenswrapper[4750]: E0214 13:52:58.741982 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.766216 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.787485 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptable
s-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.800846 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:12:22.917495534 +0000 UTC Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.806800 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.824305 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.828088 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.828168 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.828194 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.828226 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.828250 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.848094 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.868734 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.899263 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.931520 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.931586 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.931598 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.931620 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.931635 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:58Z","lastTransitionTime":"2026-02-14T13:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.937687 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.957233 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:58 crc kubenswrapper[4750]: I0214 13:52:58.980060 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.001675 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:58Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.024100 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:59Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc 
kubenswrapper[4750]: I0214 13:52:59.035554 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.035876 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.035949 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.036258 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.036353 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.052560 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:59Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.066616 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:59Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.081632 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:59Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.100466 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:52:59Z is after 2025-08-24T17:21:41Z" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.140089 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.140193 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.140224 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.140261 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.140303 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.244092 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.244208 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.244252 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.244290 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.244315 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.347737 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.348224 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.348364 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.348526 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.348633 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.451861 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.451903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.451915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.451933 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.451944 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.555195 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.555250 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.555262 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.555280 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.555298 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.659035 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.659151 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.659249 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.659334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.659370 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.741497 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.741541 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.741596 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:52:59 crc kubenswrapper[4750]: E0214 13:52:59.741677 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:52:59 crc kubenswrapper[4750]: E0214 13:52:59.741763 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:52:59 crc kubenswrapper[4750]: E0214 13:52:59.741938 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.761948 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.762494 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.762745 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.762906 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.763041 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.801409 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:05:25.498742758 +0000 UTC Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.865991 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.866098 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.866160 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.866190 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.866210 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.969966 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.970083 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.970107 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.970176 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:52:59 crc kubenswrapper[4750]: I0214 13:52:59.970203 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:52:59Z","lastTransitionTime":"2026-02-14T13:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.073679 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.073745 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.073758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.073778 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.073792 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.177616 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.177687 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.177701 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.177723 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.177736 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.184328 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.198238 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.204716 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.218906 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.243041 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T1
3:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.262173 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.281581 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.281655 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.281675 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.281704 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.281726 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.285013 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.298145 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pro
xy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f68455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.311411 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.329370 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.358837 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.378320 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.384590 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.384634 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.384647 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 
13:53:00.384666 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.384680 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.400625 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.414325 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc 
kubenswrapper[4750]: I0214 13:53:00.432221 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.449467 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.469691 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.487619 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.487668 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.487678 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc 
kubenswrapper[4750]: I0214 13:53:00.487695 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.487706 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.489581 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:00Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.591237 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.591311 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.591332 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.591361 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.591380 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.694508 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.694566 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.694586 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.694613 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.694632 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.741282 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:00 crc kubenswrapper[4750]: E0214 13:53:00.742067 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.743027 4750 scope.go:117] "RemoveContainer" containerID="f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.798278 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.798334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.798355 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.798387 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.798407 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.802331 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:43:50.703730383 +0000 UTC Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.866064 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:00 crc kubenswrapper[4750]: E0214 13:53:00.866358 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:53:00 crc kubenswrapper[4750]: E0214 13:53:00.866684 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:16.866649391 +0000 UTC m=+68.892639112 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.902243 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.902315 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.902330 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.902348 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:00 crc kubenswrapper[4750]: I0214 13:53:00.902362 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:00Z","lastTransitionTime":"2026-02-14T13:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.005680 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.005726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.005735 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.005752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.005763 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.108838 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.108888 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.108903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.108925 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.108938 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.191601 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/1.log" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.194533 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.195145 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.211989 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.212036 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.212049 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.212067 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.212079 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.220470 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.237216 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.251769 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc 
kubenswrapper[4750]: I0214 13:53:01.265663 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.280736 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.293013 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.306025 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.315213 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc 
kubenswrapper[4750]: I0214 13:53:01.315266 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.315283 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.315306 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.315319 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.319444 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2
ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.330686 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.345954 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.360573 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.378958 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.391387 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.407863 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.422169 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.422246 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.422259 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.422279 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.422291 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.426766 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.446615 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.458990 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:01Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.524596 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.524632 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.524643 4750 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.524661 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.524673 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.627127 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.627175 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.627188 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.627206 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.627223 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.676162 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.676258 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.676298 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.676395 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.676438 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:33.676425839 +0000 UTC m=+85.702415320 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.676488 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:53:33.676482561 +0000 UTC m=+85.702472042 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.676549 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.676568 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:33.676562913 +0000 UTC m=+85.702552394 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.730101 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.730412 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.730486 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.730550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.730606 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.742293 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.742628 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.742292 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.743145 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.743284 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.743424 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.776925 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.777151 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.777186 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.777206 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.777279 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:33.777257287 +0000 UTC m=+85.803246768 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.803102 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:47:56.756328962 +0000 UTC Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.833860 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.834167 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.834230 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.834290 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.834343 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.878383 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.878658 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.878683 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.878703 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:01 crc kubenswrapper[4750]: E0214 13:53:01.878771 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:33.878749393 +0000 UTC m=+85.904738904 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.938209 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.938280 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.938300 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.938325 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:01 crc kubenswrapper[4750]: I0214 13:53:01.938343 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:01Z","lastTransitionTime":"2026-02-14T13:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.041587 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.041728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.041810 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.041900 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.041961 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.144995 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.145034 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.145047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.145065 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.145080 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.201214 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/2.log" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.202478 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/1.log" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.205901 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062" exitCode=1 Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.206054 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.206216 4750 scope.go:117] "RemoveContainer" containerID="f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.207011 4750 scope.go:117] "RemoveContainer" containerID="ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062" Feb 14 13:53:02 crc kubenswrapper[4750]: E0214 13:53:02.207366 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.226646 4750 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.248063 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.248171 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.248196 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.248229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.248254 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.253838 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.278009 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.303984 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.323776 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.344152 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.351238 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.351301 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.351314 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.351335 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.351349 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.360189 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.375559 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.396873 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38c70c7289965936cc7b03df21a4cb1468f9eb1e93dfe2ce2ad92ad8766bc22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"message\\\":\\\"Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"cab7c637-a021-4a4d-a4b9-06d63c44316f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0214 13:52:42.104665 6171 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.408850 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.419583 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.431545 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.441726 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc 
kubenswrapper[4750]: I0214 13:53:02.453935 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.453984 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.453995 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.454013 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.454025 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.458612 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e16982
51c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.473758 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.488051 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.500893 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:02Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.556746 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc 
kubenswrapper[4750]: I0214 13:53:02.556799 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.556809 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.556828 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.556838 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.658643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.658679 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.658687 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.658701 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.658710 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.741046 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:02 crc kubenswrapper[4750]: E0214 13:53:02.741258 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.762355 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.762389 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.762398 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.762413 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.762424 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.803657 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:25:02.876943854 +0000 UTC Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.866336 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.866382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.866398 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.866416 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.866428 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.969451 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.969488 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.969500 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.969517 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:02 crc kubenswrapper[4750]: I0214 13:53:02.969528 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:02Z","lastTransitionTime":"2026-02-14T13:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.073034 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.073086 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.073103 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.073157 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.073179 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.176789 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.176861 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.176881 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.176905 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.176922 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.212825 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/2.log" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.217919 4750 scope.go:117] "RemoveContainer" containerID="ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062" Feb 14 13:53:03 crc kubenswrapper[4750]: E0214 13:53:03.218291 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.234966 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.256479 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.276304 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.280392 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.280472 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.280496 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.280530 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.280562 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.301192 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.318763 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.345538 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.380146 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 
13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.383826 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.383894 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.383915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.383945 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.383969 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.395511 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f68455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.411402 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.430857 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.444769 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc 
kubenswrapper[4750]: I0214 13:53:03.458693 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.473274 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.486698 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.486786 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.486803 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc 
kubenswrapper[4750]: I0214 13:53:03.486832 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.486852 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.488067 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.507962 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.525056 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.541751 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:03Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.590490 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.590538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.590550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc 
kubenswrapper[4750]: I0214 13:53:03.590567 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.590581 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.694491 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.694563 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.694585 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.694614 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.694635 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.741329 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:03 crc kubenswrapper[4750]: E0214 13:53:03.741494 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.741891 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.741942 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:03 crc kubenswrapper[4750]: E0214 13:53:03.743006 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:03 crc kubenswrapper[4750]: E0214 13:53:03.743595 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.798328 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.798381 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.798395 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.798414 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.798429 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.804793 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:00:38.35268019 +0000 UTC Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.901535 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.901621 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.901643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.901669 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:03 crc kubenswrapper[4750]: I0214 13:53:03.901689 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:03Z","lastTransitionTime":"2026-02-14T13:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.005176 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.005239 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.005256 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.005281 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.005299 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.108682 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.108756 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.108791 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.108826 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.108854 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.212321 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.212396 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.212419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.212449 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.212470 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.315138 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.315195 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.315263 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.315285 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.315297 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.418640 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.418695 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.418705 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.418723 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.418735 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.521794 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.521851 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.521866 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.521889 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.521904 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.625465 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.625549 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.625571 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.625603 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.625626 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.728824 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.728890 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.728913 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.728944 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.728972 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.748541 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:04 crc kubenswrapper[4750]: E0214 13:53:04.748839 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.805528 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:04:46.570906411 +0000 UTC Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.832441 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.832497 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.832516 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.832540 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.832559 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.935663 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.935718 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.935736 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.935761 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:04 crc kubenswrapper[4750]: I0214 13:53:04.935779 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:04Z","lastTransitionTime":"2026-02-14T13:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.038906 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.038968 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.038997 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.039025 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.039047 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.142951 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.143039 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.143059 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.143094 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.143148 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.245351 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.245773 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.245919 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.246059 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.246247 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.350092 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.350521 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.350718 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.350891 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.351065 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.455050 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.455185 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.455213 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.455243 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.455265 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.557985 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.558072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.558090 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.558189 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.558250 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.662356 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.662440 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.662465 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.662495 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.662521 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.740966 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:05 crc kubenswrapper[4750]: E0214 13:53:05.741202 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.741569 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.741655 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:05 crc kubenswrapper[4750]: E0214 13:53:05.742072 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:05 crc kubenswrapper[4750]: E0214 13:53:05.742215 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.765573 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.766047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.766310 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.766533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.766740 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.806076 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:34:19.964609621 +0000 UTC Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.869969 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.870044 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.870076 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.870146 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.870180 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.974171 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.974577 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.974763 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.975045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:05 crc kubenswrapper[4750]: I0214 13:53:05.975311 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:05Z","lastTransitionTime":"2026-02-14T13:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.078628 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.078950 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.079200 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.079401 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.079546 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.183093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.183218 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.183237 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.183264 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.183283 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.286486 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.286545 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.286563 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.286589 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.286606 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.390594 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.390668 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.390678 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.390701 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.390713 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.493478 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.493564 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.493588 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.493612 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.493630 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.601683 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.604416 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.604449 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.604511 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.604533 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.665414 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.665467 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.665485 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.665511 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.665533 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.688087 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:06Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.693643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.693719 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.693740 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.693766 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.693784 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.717510 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:06Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.722924 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.722990 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.723009 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.723038 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.723062 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.741496 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.741731 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.743018 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:06Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.748879 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.748985 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.749006 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.749072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.749149 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.769647 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:06Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.775039 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.775154 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.775181 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.775201 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.775217 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.792088 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:06Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:06 crc kubenswrapper[4750]: E0214 13:53:06.792281 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.794567 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.794611 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.794621 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.794638 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.794652 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.808040 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:35:08.061282429 +0000 UTC Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.897513 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.897583 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.897609 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.897644 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:06 crc kubenswrapper[4750]: I0214 13:53:06.897665 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:06Z","lastTransitionTime":"2026-02-14T13:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.001281 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.001358 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.001379 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.001402 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.001419 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.105445 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.105510 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.105533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.105561 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.105580 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.208791 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.208854 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.208872 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.208897 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.208914 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.312225 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.312274 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.312290 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.312313 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.312332 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.415353 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.415426 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.415443 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.415473 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.415493 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.518416 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.518474 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.518487 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.518506 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.518518 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.621600 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.621637 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.621645 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.621664 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.621675 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.724720 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.724752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.724761 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.724775 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.724783 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.741013 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.741061 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:07 crc kubenswrapper[4750]: E0214 13:53:07.741258 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.741280 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:07 crc kubenswrapper[4750]: E0214 13:53:07.741365 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:07 crc kubenswrapper[4750]: E0214 13:53:07.741458 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.808387 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:03:43.201353039 +0000 UTC Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.827963 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.828049 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.828072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.828103 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.828162 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.930808 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.930868 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.930888 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.930913 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:07 crc kubenswrapper[4750]: I0214 13:53:07.930933 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:07Z","lastTransitionTime":"2026-02-14T13:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.033859 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.033935 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.033957 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.033987 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.034007 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.136975 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.137047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.137068 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.137099 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.137160 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.240032 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.240100 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.240144 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.240173 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.240190 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.343419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.343471 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.343489 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.343513 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.343532 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.447541 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.447607 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.447624 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.447656 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.447675 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.550849 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.550936 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.550956 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.550989 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.551011 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.655092 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.655187 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.655206 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.655238 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.655257 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.741071 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:08 crc kubenswrapper[4750]: E0214 13:53:08.741334 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.757892 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.757944 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.757960 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.757982 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.758000 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.768262 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e16982
51c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.791017 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.809507 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:27:52.443653958 +0000 UTC Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.809607 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.830777 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.846279 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.862540 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.862666 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.862688 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.862716 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.862738 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.871854 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.906509 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.931672 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.948392 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.965952 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.966011 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.966029 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.966052 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.966041 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.966069 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:08Z","lastTransitionTime":"2026-02-14T13:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:08 crc kubenswrapper[4750]: I0214 13:53:08.980843 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:08Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.004202 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:09Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.033969 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 
13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:09Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.050069 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:09Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.071681 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.071927 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.072815 4750 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.073044 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.073071 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.074087 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:09Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.096223 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:09Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.113260 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:09Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.175914 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.175976 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.175996 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.176019 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.176036 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.279330 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.279401 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.279421 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.279449 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.279470 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.383461 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.383574 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.383593 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.383661 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.383680 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.487086 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.487576 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.487847 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.488103 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.488369 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.591543 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.591598 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.591615 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.591640 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.591658 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.694352 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.694777 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.695001 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.695276 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.695474 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.741396 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.741435 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.741462 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:09 crc kubenswrapper[4750]: E0214 13:53:09.742028 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:09 crc kubenswrapper[4750]: E0214 13:53:09.742306 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:09 crc kubenswrapper[4750]: E0214 13:53:09.742352 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.798764 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.799334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.799669 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.799814 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.799941 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.809783 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:51:13.363578253 +0000 UTC Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.903680 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.904105 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.904318 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.904540 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:09 crc kubenswrapper[4750]: I0214 13:53:09.904759 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:09Z","lastTransitionTime":"2026-02-14T13:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.007874 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.007957 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.007983 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.008016 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.008039 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.111229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.111740 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.111886 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.112134 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.112316 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.214724 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.214778 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.214790 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.214812 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.214827 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.318734 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.318785 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.318798 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.318815 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.318839 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.422009 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.422153 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.422174 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.422199 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.422218 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.526630 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.526716 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.526736 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.526767 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.526789 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.630436 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.630514 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.630532 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.630562 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.630582 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.733790 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.733863 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.733887 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.734312 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.734336 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.741448 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:10 crc kubenswrapper[4750]: E0214 13:53:10.741738 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.810277 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:34:56.284739638 +0000 UTC Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.837933 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.837996 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.838014 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.838041 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.838059 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.941832 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.941905 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.941930 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.941985 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:10 crc kubenswrapper[4750]: I0214 13:53:10.942023 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:10Z","lastTransitionTime":"2026-02-14T13:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.045962 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.046034 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.046053 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.046078 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.046096 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.149619 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.149689 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.149708 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.149741 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.149769 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.253585 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.253642 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.253656 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.253678 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.253720 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.356946 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.357002 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.357020 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.357044 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.357064 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.459953 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.460043 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.460059 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.460078 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.460093 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.563207 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.563256 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.563268 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.563286 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.563301 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.665993 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.666036 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.666045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.666061 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.666071 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.741460 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.741544 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.741625 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:11 crc kubenswrapper[4750]: E0214 13:53:11.741821 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:11 crc kubenswrapper[4750]: E0214 13:53:11.742071 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:11 crc kubenswrapper[4750]: E0214 13:53:11.742271 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.769162 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.769212 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.769230 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.769256 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.769276 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.810992 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:48:16.74825641 +0000 UTC Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.872085 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.872176 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.872196 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.872220 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.872237 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.975904 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.975957 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.975972 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.975993 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:11 crc kubenswrapper[4750]: I0214 13:53:11.976008 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:11Z","lastTransitionTime":"2026-02-14T13:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.079660 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.079758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.079779 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.079807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.079826 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.182901 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.182979 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.182996 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.183037 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.183056 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.287527 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.287579 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.287609 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.287627 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.287638 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.391602 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.391675 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.391700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.391726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.391746 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.494331 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.494380 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.494390 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.494407 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.494426 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.597356 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.597425 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.597445 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.597474 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.597495 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.700623 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.700691 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.700712 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.700741 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.700761 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.741200 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:12 crc kubenswrapper[4750]: E0214 13:53:12.741523 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.805269 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.805334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.805347 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.805389 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.805404 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.811956 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:12:13.473561049 +0000 UTC Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.908282 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.908347 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.908367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.908395 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:12 crc kubenswrapper[4750]: I0214 13:53:12.908414 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:12Z","lastTransitionTime":"2026-02-14T13:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.010959 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.010997 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.011009 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.011025 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.011040 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.116725 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.117205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.117226 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.117248 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.117263 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.219505 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.219536 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.219544 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.219558 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.219568 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.322004 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.322088 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.322104 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.322142 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.322155 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.425469 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.425539 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.425562 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.425594 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.425616 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.527916 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.527967 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.527977 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.527998 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.528009 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.630646 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.630714 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.630733 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.630758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.630772 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.734095 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.734181 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.734191 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.734209 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.734220 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.741501 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.741605 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.741696 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:13 crc kubenswrapper[4750]: E0214 13:53:13.741869 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:13 crc kubenswrapper[4750]: E0214 13:53:13.742030 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:13 crc kubenswrapper[4750]: E0214 13:53:13.742245 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.812950 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:45:08.031713046 +0000 UTC Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.839071 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.839175 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.839201 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.839237 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.839260 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.942027 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.942075 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.942086 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.942106 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:13 crc kubenswrapper[4750]: I0214 13:53:13.942159 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:13Z","lastTransitionTime":"2026-02-14T13:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.045225 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.045265 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.045275 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.045292 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.045303 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.147873 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.147943 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.147962 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.147988 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.148005 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.251789 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.251852 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.251874 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.251905 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.251923 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.353951 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.354024 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.354041 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.354068 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.354088 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.456934 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.456977 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.456986 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.457001 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.457011 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.560199 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.560253 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.560271 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.560297 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.560316 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.663011 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.663075 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.663092 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.663148 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.663174 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.741895 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:14 crc kubenswrapper[4750]: E0214 13:53:14.742070 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.765859 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.765915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.765932 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.765951 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.765964 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.813217 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:11:27.960627706 +0000 UTC Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.869505 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.869645 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.869669 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.869690 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.869707 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.973282 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.973332 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.973364 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.973386 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:14 crc kubenswrapper[4750]: I0214 13:53:14.973403 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:14Z","lastTransitionTime":"2026-02-14T13:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.076280 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.076340 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.076358 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.076385 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.076405 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.179100 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.179198 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.179216 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.179241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.179267 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.281986 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.282045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.282093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.282139 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.282156 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.384357 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.384416 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.384433 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.384456 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.384478 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.486958 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.487004 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.487016 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.487035 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.487046 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.590022 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.590094 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.590153 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.590187 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.590210 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.693410 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.693457 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.693467 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.693488 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.693498 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.741094 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.741107 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:15 crc kubenswrapper[4750]: E0214 13:53:15.741238 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.741120 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:15 crc kubenswrapper[4750]: E0214 13:53:15.741339 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:15 crc kubenswrapper[4750]: E0214 13:53:15.741363 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.796385 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.796431 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.796733 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.796761 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.796776 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.813508 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:39:31.875644398 +0000 UTC Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.899660 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.899717 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.899726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.899744 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:15 crc kubenswrapper[4750]: I0214 13:53:15.899756 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:15Z","lastTransitionTime":"2026-02-14T13:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.055538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.055601 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.055620 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.055643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.055660 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.158105 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.158176 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.158191 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.158213 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.158231 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.260367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.260391 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.260399 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.260413 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.260421 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.364002 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.364044 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.364056 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.364072 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.364085 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.467427 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.467482 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.467500 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.467524 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.467541 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.570423 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.570482 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.570491 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.570506 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.570516 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.673336 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.673395 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.673408 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.673426 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.673441 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.740925 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:16 crc kubenswrapper[4750]: E0214 13:53:16.741082 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.776762 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.776846 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.776867 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.776893 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.776914 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.814052 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:53:48.630868001 +0000 UTC Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.880293 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.880466 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.880525 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.880673 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.880762 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.957105 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:16 crc kubenswrapper[4750]: E0214 13:53:16.957364 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:53:16 crc kubenswrapper[4750]: E0214 13:53:16.957490 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:53:48.957463166 +0000 UTC m=+100.983452847 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.984553 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.984593 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.984605 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.984626 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:16 crc kubenswrapper[4750]: I0214 13:53:16.984639 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:16Z","lastTransitionTime":"2026-02-14T13:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.046471 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.046525 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.046536 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.046555 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.046573 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.066080 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:17Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.070972 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.071087 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.071171 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.071204 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.071225 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.085779 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:17Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.091392 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.091461 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.091475 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.091498 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.091512 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.110221 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:17Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.115156 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.115201 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.115212 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.115228 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.115240 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.154451 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:17Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.154682 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.156783 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.156862 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.156887 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.156921 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.156948 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.259229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.259288 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.259305 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.259329 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.259346 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.362531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.362582 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.362596 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.362622 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.362635 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.465245 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.465294 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.465311 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.465332 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.465347 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.574819 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.574878 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.574890 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.574910 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.574924 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.678066 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.678142 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.678155 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.678174 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.678188 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.741545 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.741628 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.741671 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.741646 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.741819 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:17 crc kubenswrapper[4750]: E0214 13:53:17.741961 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.780908 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.780953 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.780963 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.780980 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.780993 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.814705 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:53:42.329918801 +0000 UTC Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.884104 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.884172 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.884185 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.884203 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.884216 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.988294 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.988368 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.988388 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.988413 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:17 crc kubenswrapper[4750]: I0214 13:53:17.988432 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:17Z","lastTransitionTime":"2026-02-14T13:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.091811 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.091857 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.091869 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.091889 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.091899 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.194229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.194289 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.194301 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.194317 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.194328 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.273662 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/0.log" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.274157 4750 generic.go:334] "Generic (PLEG): container finished" podID="7475461f-e0e5-4d5e-91fd-bfe8fb575146" containerID="08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202" exitCode=1 Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.274215 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerDied","Data":"08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.274821 4750 scope.go:117] "RemoveContainer" containerID="08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.291746 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.296942 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.296973 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.296984 4750 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.297000 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.297013 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.309151 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.327913 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.350565 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 
13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.366750 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.382784 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.396737 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.399582 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.399618 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.399630 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.399650 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.399662 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.415567 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e16982
51c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.428579 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.445462 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.460610 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.474149 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.485623 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.498799 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T1
3:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.502806 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.502879 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.502920 4750 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.502949 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.502964 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.512669 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.527667 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.541967 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.605498 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.605530 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.605538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.605553 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.605563 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.708848 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.708926 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.708945 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.708970 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.708981 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.741747 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:18 crc kubenswrapper[4750]: E0214 13:53:18.742681 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.743471 4750 scope.go:117] "RemoveContainer" containerID="ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062" Feb 14 13:53:18 crc kubenswrapper[4750]: E0214 13:53:18.744056 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.755903 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.773259 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.804811 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 
13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.810684 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.810705 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.810713 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.810726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.810736 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.815299 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:34:06.499479931 +0000 UTC Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.817002 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f68455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.835185 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.854846 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.868879 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc 
kubenswrapper[4750]: I0214 13:53:18.888220 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.904766 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.916589 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.916655 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.916668 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.916693 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.916710 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:18Z","lastTransitionTime":"2026-02-14T13:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.926837 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.941397 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.957084 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.968581 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.984868 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:18 crc kubenswrapper[4750]: I0214 13:53:18.996005 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:18Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.009961 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.019436 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.020318 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.020360 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.020375 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.020399 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.020414 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.123071 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.123137 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.123152 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.123168 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.123179 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.225531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.225577 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.225588 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.225605 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.225617 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.281920 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/0.log" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.282047 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerStarted","Data":"ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.295991 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.308052 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.318768 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.328807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.328870 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.328888 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc 
kubenswrapper[4750]: I0214 13:53:19.328916 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.328938 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.333634 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.347833 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 
13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.359431 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.375577 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.389171 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.406958 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.418720 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.431543 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.431561 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.431603 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.431618 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.431639 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.431653 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.449549 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.470653 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.483033 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.498310 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.511700 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.525749 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:19Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:19 crc 
kubenswrapper[4750]: I0214 13:53:19.535460 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.535507 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.535519 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.535538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.535550 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.638201 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.638323 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.638334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.638350 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.638362 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.740857 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.740938 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.740872 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:19 crc kubenswrapper[4750]: E0214 13:53:19.741055 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:19 crc kubenswrapper[4750]: E0214 13:53:19.741321 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:19 crc kubenswrapper[4750]: E0214 13:53:19.741464 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.742571 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.742604 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.742620 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.742643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.742661 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.815593 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:28:02.977328437 +0000 UTC Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.845205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.845265 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.845287 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.845315 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.845337 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.949303 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.949352 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.949368 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.949391 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:19 crc kubenswrapper[4750]: I0214 13:53:19.949408 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:19Z","lastTransitionTime":"2026-02-14T13:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.056270 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.056324 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.056341 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.056363 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.056380 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.158661 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.158734 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.158752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.158778 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.158791 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.262056 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.262108 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.262153 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.262179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.262196 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.365401 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.365447 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.365459 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.365477 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.365492 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.469058 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.469106 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.469138 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.469161 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.469176 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.572745 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.572797 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.572811 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.572832 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.572866 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.675884 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.675928 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.675938 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.675955 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.675964 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.741479 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:20 crc kubenswrapper[4750]: E0214 13:53:20.741646 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.777855 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.777910 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.777926 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.777947 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.777963 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.816001 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:46:39.536438259 +0000 UTC Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.880696 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.880752 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.880768 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.880787 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.880802 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.985198 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.985238 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.985249 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.985266 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:20 crc kubenswrapper[4750]: I0214 13:53:20.985278 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:20Z","lastTransitionTime":"2026-02-14T13:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.088372 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.088419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.088435 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.088452 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.088463 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.191761 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.191813 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.191824 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.191845 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.191859 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.295840 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.295900 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.295909 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.295929 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.295939 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.399159 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.399223 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.399241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.399267 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.399285 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.502161 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.502223 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.502237 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.502259 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.502272 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.604947 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.605048 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.605060 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.605083 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.605095 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.708555 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.708612 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.708622 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.708648 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.708664 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.741573 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.741618 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.741588 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:21 crc kubenswrapper[4750]: E0214 13:53:21.741815 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:21 crc kubenswrapper[4750]: E0214 13:53:21.741980 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:21 crc kubenswrapper[4750]: E0214 13:53:21.742236 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.812164 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.812211 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.812220 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.812238 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.812248 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.816649 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:03:19.467323272 +0000 UTC Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.915161 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.915212 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.915221 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.915240 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:21 crc kubenswrapper[4750]: I0214 13:53:21.915251 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:21Z","lastTransitionTime":"2026-02-14T13:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.018029 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.018099 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.018142 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.018169 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.018190 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.120580 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.120626 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.120637 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.120655 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.120678 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.223979 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.224018 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.224026 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.224045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.224055 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.326977 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.327073 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.327094 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.327158 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.327183 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.430190 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.430243 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.430256 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.430278 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.430293 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.533205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.533269 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.533281 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.533303 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.533319 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.636428 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.636493 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.636509 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.636531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.636547 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.740075 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.740167 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.740188 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.740213 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.740230 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.740928 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4"
Feb 14 13:53:22 crc kubenswrapper[4750]: E0214 13:53:22.741100 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.817398 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:59:24.552604585 +0000 UTC
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.844516 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.844579 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.844597 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.844627 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.844645 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.948471 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.948525 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.948539 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.948558 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:22 crc kubenswrapper[4750]: I0214 13:53:22.948572 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:22Z","lastTransitionTime":"2026-02-14T13:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.480730 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:52:50.362958106 +0000 UTC
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.481188 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 13:53:24 crc kubenswrapper[4750]: E0214 13:53:24.481339 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.481727 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.482059 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.482385 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:24 crc kubenswrapper[4750]: E0214 13:53:24.482441 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.482480 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.482550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.482575 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.482634 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:24Z","lastTransitionTime":"2026-02-14T13:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:24 crc kubenswrapper[4750]: E0214 13:53:24.482753 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.483767 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4"
Feb 14 13:53:24 crc kubenswrapper[4750]: E0214 13:53:24.483907 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.585720 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.585793 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.585815 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.585858 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.585878 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:24Z","lastTransitionTime":"2026-02-14T13:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.689418 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.689487 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.689505 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.689533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.689553 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:24Z","lastTransitionTime":"2026-02-14T13:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.793695 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.793799 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.793824 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.793856 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.793880 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:24Z","lastTransitionTime":"2026-02-14T13:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.897334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.897401 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.897420 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.897447 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:24 crc kubenswrapper[4750]: I0214 13:53:24.897467 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:24Z","lastTransitionTime":"2026-02-14T13:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.000477 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.000640 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.000669 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.000819 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.000852 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.103657 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.104837 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.104856 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.104879 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.104900 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.208042 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.208218 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.208247 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.208277 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.208300 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.310909 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.310974 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.310994 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.311020 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.311040 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.415257 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.415338 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.415360 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.415396 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.415417 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.481308 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:00:17.168556249 +0000 UTC
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.518465 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.518546 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.518572 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.518605 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.518629 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.622218 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.622285 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.622304 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.622333 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.622353 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.725036 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.725080 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.725090 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.725108 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.725156 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.741534 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.741569 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 13:53:25 crc kubenswrapper[4750]: E0214 13:53:25.741669 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699"
Feb 14 13:53:25 crc kubenswrapper[4750]: E0214 13:53:25.741767 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.828206 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.828294 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.828317 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.828347 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.828366 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.931431 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.931501 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.931524 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.931559 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:25 crc kubenswrapper[4750]: I0214 13:53:25.931580 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:25Z","lastTransitionTime":"2026-02-14T13:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.034890 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.034969 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.034992 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.035055 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.035078 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.145279 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.145343 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.145367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.145396 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.145417 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.248996 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.249041 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.249058 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.249082 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.249099 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.352280 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.352847 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.352873 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.352897 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.352914 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.455808 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.456168 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.456306 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.456531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.456664 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.482370 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:47:35.460283767 +0000 UTC
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.560408 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.562184 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.562227 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.562258 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.562275 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.665850 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.665920 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.665943 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.665974 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.665997 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.741642 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.741819 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:26 crc kubenswrapper[4750]: E0214 13:53:26.742336 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:26 crc kubenswrapper[4750]: E0214 13:53:26.742475 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.759182 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.768817 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.768854 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.768868 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.768891 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.768906 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.872862 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.872948 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.872980 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.873016 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.873043 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.976549 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.976618 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.976657 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.976688 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:26 crc kubenswrapper[4750]: I0214 13:53:26.976709 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:26Z","lastTransitionTime":"2026-02-14T13:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.080532 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.080623 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.080654 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.080687 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.080708 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.184216 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.184295 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.184316 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.184350 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.184372 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.288260 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.288367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.288391 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.288420 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.288442 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.355254 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.355319 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.355336 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.355368 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.355386 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.370176 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:27Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.375602 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.375667 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.375696 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.375727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.375751 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.388274 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:27Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.392995 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.393035 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.393049 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.393068 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.393081 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.412916 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:27Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.418367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.418429 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.418446 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.418470 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.418493 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.483429 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:54:59.852905137 +0000 UTC Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.486848 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",
\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:27Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.490736 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.490784 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.490816 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.490834 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.490844 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.502560 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:27Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.502720 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.504879 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.504920 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.504933 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.504951 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.504963 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.608437 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.608488 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.608507 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.608536 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.608553 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.711762 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.711871 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.711886 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.711908 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.711922 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.741495 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.741547 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.741720 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:27 crc kubenswrapper[4750]: E0214 13:53:27.742019 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.815419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.815482 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.815500 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.815525 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.815543 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.918756 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.918832 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.918855 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.918885 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:27 crc kubenswrapper[4750]: I0214 13:53:27.918907 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:27Z","lastTransitionTime":"2026-02-14T13:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.021048 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.021087 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.021103 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.021143 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.021156 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.124691 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.124758 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.124777 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.124804 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.124823 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.229066 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.229182 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.229205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.229231 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.229249 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.332417 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.332468 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.332481 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.332501 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.332515 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.435634 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.435707 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.435728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.435754 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.435773 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.484378 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:04:45.794434125 +0000 UTC Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.538715 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.538785 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.538798 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.538825 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.538842 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.642877 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.642955 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.642974 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.643002 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.643021 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.740839 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.740846 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:28 crc kubenswrapper[4750]: E0214 13:53:28.741043 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:28 crc kubenswrapper[4750]: E0214 13:53:28.741282 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.747262 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.747369 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.747395 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.747426 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.747448 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.765201 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e16982
51c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.788687 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.808557 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.829939 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.851095 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.851167 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.851184 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.851208 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.851226 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.853195 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.876382 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.897023 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.914296 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.929333 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.942895 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.954030 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.954088 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.954136 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.954165 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.954187 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:28Z","lastTransitionTime":"2026-02-14T13:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.956977 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e95f2324-9bf3-413c-bf13-9432c3d2f8b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95a39773bc8329c56bc3ccefefd82ed52ecc4c69704e09d04af661e4d77e4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.970836 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:28 crc kubenswrapper[4750]: I0214 13:53:28.992541 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:28Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.019547 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:29Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.036792 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:29Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.057395 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.057475 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.057493 4750 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.057519 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.057538 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.064679 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:29Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.085132 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:29Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.096406 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:29Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.160022 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.160102 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.160155 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.160185 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.160206 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.263142 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.263220 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.263240 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.263265 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.263285 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.366004 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.366050 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.366059 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.366080 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.366093 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.469613 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.469686 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.469709 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.469744 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.469770 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.485467 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:50:28.861284056 +0000 UTC Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.572649 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.572710 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.572723 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.572748 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.572762 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.675866 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.675905 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.675934 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.675955 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.675967 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.741925 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.741969 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:29 crc kubenswrapper[4750]: E0214 13:53:29.742173 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:29 crc kubenswrapper[4750]: E0214 13:53:29.742358 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.779559 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.779613 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.779637 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.779662 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.779680 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.883302 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.883354 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.883372 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.883397 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.883415 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.986890 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.986962 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.986979 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.987005 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:29 crc kubenswrapper[4750]: I0214 13:53:29.987025 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:29Z","lastTransitionTime":"2026-02-14T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.090276 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.090351 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.090411 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.090443 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.090461 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.194195 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.194263 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.194282 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.194310 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.194328 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.297952 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.298007 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.298018 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.298037 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.298050 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.401468 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.401571 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.401624 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.401651 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.401705 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.486619 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:16:42.660342456 +0000 UTC Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.504551 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.504628 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.504646 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.504676 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.504693 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.608684 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.608763 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.608783 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.608810 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.608829 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.712937 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.713032 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.713044 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.713066 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.713083 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.741555 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.741539 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:30 crc kubenswrapper[4750]: E0214 13:53:30.741706 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:30 crc kubenswrapper[4750]: E0214 13:53:30.741800 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.816353 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.816393 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.816402 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.816419 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.816430 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.920077 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.920187 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.920212 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.920241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:30 crc kubenswrapper[4750]: I0214 13:53:30.920260 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:30Z","lastTransitionTime":"2026-02-14T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.023392 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.023483 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.023509 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.023535 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.023553 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.132237 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.132294 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.132321 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.132352 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.132372 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.237052 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.237162 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.237180 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.237200 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.237211 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.340482 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.340535 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.340546 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.340569 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.340581 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.443638 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.443698 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.443709 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.443731 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.443818 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.487319 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:31:03.191471514 +0000 UTC Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.547197 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.547241 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.547251 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.547267 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.547278 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.651368 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.651451 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.651471 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.651505 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.651604 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.741636 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.741694 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:31 crc kubenswrapper[4750]: E0214 13:53:31.742171 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:31 crc kubenswrapper[4750]: E0214 13:53:31.742334 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.754626 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.754687 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.754699 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.754721 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.754736 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.857592 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.857645 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.857655 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.857674 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.857689 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.961864 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.961944 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.961969 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.962003 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:31 crc kubenswrapper[4750]: I0214 13:53:31.962027 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:31Z","lastTransitionTime":"2026-02-14T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.065865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.065938 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.065958 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.065993 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.066032 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.169728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.169810 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.169838 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.169872 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.169902 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.273737 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.273800 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.273822 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.273844 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.273856 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.376734 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.376777 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.376789 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.376807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.376819 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.480375 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.480439 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.480454 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.480481 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.480503 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.487830 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:23:42.166296557 +0000 UTC Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.584638 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.584702 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.584724 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.584755 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.584776 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.688457 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.688526 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.688550 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.688578 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.688599 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.741805 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.741979 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:32 crc kubenswrapper[4750]: E0214 13:53:32.742369 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:32 crc kubenswrapper[4750]: E0214 13:53:32.742236 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.791663 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.791702 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.791713 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.791730 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.791742 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.894526 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.894578 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.894593 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.894615 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.894634 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.997828 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.997881 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.997891 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.997908 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:32 crc kubenswrapper[4750]: I0214 13:53:32.997917 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:32Z","lastTransitionTime":"2026-02-14T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.100878 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.100938 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.100956 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.100983 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.101002 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.204227 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.204294 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.204313 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.204342 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.204362 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.307279 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.307372 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.307396 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.307435 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.307465 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.410807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.410877 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.410897 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.410925 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.410943 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.488529 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:58:19.927077731 +0000 UTC Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.515455 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.515517 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.515535 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.515565 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.515584 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.618626 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.618700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.618727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.618760 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.618783 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.684084 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.684318 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.684349 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.684315329 +0000 UTC m=+149.710304840 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.684413 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.684594 4750 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.684633 4750 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.684667 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.684647268 +0000 UTC m=+149.710636779 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.684722 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.684697729 +0000 UTC m=+149.710687250 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.721957 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.722050 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.722071 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.722229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.722263 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.761425 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.761495 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.761604 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.763452 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.763735 4750 scope.go:117] "RemoveContainer" containerID="ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.785293 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.786762 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.786823 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.786852 4750 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.786949 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.786919209 +0000 UTC m=+149.812908800 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.826233 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.826323 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.826341 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.826367 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.826416 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.888007 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.888342 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.888406 4750 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.888431 4750 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:33 crc kubenswrapper[4750]: E0214 13:53:33.888530 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.888501432 +0000 UTC m=+149.914490943 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.935468 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.935531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.935545 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.935571 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:33 crc kubenswrapper[4750]: I0214 13:53:33.935587 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:33Z","lastTransitionTime":"2026-02-14T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.039247 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.039316 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.039338 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.039371 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.039396 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.142643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.142693 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.142705 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.142725 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.142736 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.245692 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.245743 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.245759 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.245803 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.245822 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.349790 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.350133 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.350146 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.350165 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.350178 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.454464 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.454508 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.454522 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.454540 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.454553 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.488680 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:08:28.843954543 +0000 UTC Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.531183 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/2.log" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.534289 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.534702 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.552618 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.557467 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.557509 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.557521 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.557542 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.557554 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.568768 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708e
c5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.584230 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.602354 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.620220 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.638509 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
4T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.655694 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.661013 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.661059 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.661070 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.661093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.661104 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.674581 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.688864 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.701609 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.714419 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e95f2324-9bf3-413c-bf13-9432c3d2f8b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95a39773bc8329c56bc3ccefefd82ed52ecc4c69704e09d04af661e4d77e4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc 
kubenswrapper[4750]: I0214 13:53:34.728264 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.741917 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.741956 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:34 crc kubenswrapper[4750]: E0214 13:53:34.742186 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:34 crc kubenswrapper[4750]: E0214 13:53:34.742249 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.745834 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.763682 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.763727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.763740 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.763760 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.763773 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.766996 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 
13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.783418 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.798829 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.813506 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.828436 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:34Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:34 crc 
kubenswrapper[4750]: I0214 13:53:34.866906 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.866967 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.866978 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.867001 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.867014 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.970621 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.970680 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.970700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.970730 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:34 crc kubenswrapper[4750]: I0214 13:53:34.970750 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:34Z","lastTransitionTime":"2026-02-14T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.074183 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.074263 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.074287 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.074313 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.074330 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.177846 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.177937 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.177966 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.178001 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.178025 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.283426 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.283529 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.283561 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.283600 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.283641 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.386898 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.386954 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.386974 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.386998 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.387019 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.488793 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:02:10.883958805 +0000 UTC Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.490518 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.490586 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.490604 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.490632 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.490650 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.540817 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/3.log" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.541849 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/2.log" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.545805 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" exitCode=1 Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.545874 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.545945 4750 scope.go:117] "RemoveContainer" containerID="ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.546947 4750 scope.go:117] "RemoveContainer" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" Feb 14 13:53:35 crc kubenswrapper[4750]: E0214 13:53:35.547253 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.579279 4750 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab737617b99ba644a8fe0242758ed580ecd2b8f922075ba4e31ebab5607e3062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:01Z\\\",\\\"message\\\":\\\"6 6382 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631252 6382 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:01.631276 6382 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 
13:53:01.631324 6382 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 13:53:01.631338 6382 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:01.631343 6382 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 13:53:01.631372 6382 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:01.631376 6382 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 13:53:01.631386 6382 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 13:53:01.631407 6382 factory.go:656] Stopping watch factory\\\\nI0214 13:53:01.631393 6382 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 13:53:01.631417 6382 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 13:53:01.631826 6382 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:01.631436 6382 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 13:53:01.631857 6382 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:34Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734166 6862 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734553 6862 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734791 6862 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.739009 6862 handler.go:190] 
Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:34.739066 6862 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:53:34.739178 6862 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:34.739297 6862 factory.go:656] Stopping watch factory\\\\nI0214 13:53:34.739313 6862 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:34.741792 6862 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0214 13:53:34.741820 6862 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0214 13:53:34.741905 6862 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:53:34.741950 6862 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0214 13:53:34.742515 6862 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.594553 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 
13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.594601 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.594619 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.594648 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.594673 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.598643 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.627150 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e95f2324-9bf3-413c-bf13-9432c3d2f8b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95a39773bc8329c56bc3ccefefd82ed52ecc4c69704e09d04af661e4d77e4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.648006 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.675848 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cab
b700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.696564 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.698301 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.698365 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.698382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.698413 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.698430 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.716215 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.733155 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.741212 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.741225 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:35 crc kubenswrapper[4750]: E0214 13:53:35.741410 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:35 crc kubenswrapper[4750]: E0214 13:53:35.741471 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.751078 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.773093 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.795097 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.801449 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.801518 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.801544 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.801576 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.801597 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.816068 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.834478 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.854278 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.872270 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.895979 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T1
3:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.905383 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.905432 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.905453 4750 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.905478 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.905501 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:35Z","lastTransitionTime":"2026-02-14T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.919432 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:35 crc kubenswrapper[4750]: I0214 13:53:35.938634 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:35Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.008980 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.009045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.009065 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.009103 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.009149 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.113251 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.113319 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.113338 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.113363 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.113381 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.216386 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.216439 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.216460 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.216483 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.216505 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.319454 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.319514 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.319530 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.319586 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.319608 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.422770 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.422839 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.422860 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.422962 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.422987 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.489464 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:53:08.703396232 +0000 UTC Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.526014 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.526065 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.526080 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.526099 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.526136 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.553590 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/3.log" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.559688 4750 scope.go:117] "RemoveContainer" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" Feb 14 13:53:36 crc kubenswrapper[4750]: E0214 13:53:36.559972 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.582783 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.605684 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.628750 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.628810 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.628828 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.628855 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.628875 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.630687 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc 
kubenswrapper[4750]: I0214 13:53:36.653483 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.679201 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.699937 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4
c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.733788 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.734205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.734246 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.734446 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.734726 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.734748 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.741567 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:36 crc kubenswrapper[4750]: E0214 13:53:36.741720 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.741988 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:36 crc kubenswrapper[4750]: E0214 13:53:36.742302 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.757831 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.780677 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.811415 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.834595 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.837878 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.838101 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.838298 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.838429 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.838565 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.862608 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.895781 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.919193 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e95f2324-9bf3-413c-bf13-9432c3d2f8b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95a39773bc8329c56bc3ccefefd82ed52ecc4c69704e09d04af661e4d77e4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc 
kubenswrapper[4750]: I0214 13:53:36.941965 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.942464 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.942598 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.942729 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.942854 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:36Z","lastTransitionTime":"2026-02-14T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.948428 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.968174 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:36 crc kubenswrapper[4750]: I0214 13:53:36.989695 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:34Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734166 6862 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734553 6862 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734791 6862 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.739009 6862 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:34.739066 6862 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:53:34.739178 6862 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:34.739297 6862 factory.go:656] Stopping watch factory\\\\nI0214 13:53:34.739313 6862 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:34.741792 6862 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0214 13:53:34.741820 6862 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0214 13:53:34.741905 6862 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:53:34.741950 6862 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0214 13:53:34.742515 6862 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:36Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.004947 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.046579 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.046639 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.046657 4750 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.046680 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.046698 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.149470 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.149535 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.149557 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.149586 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.149608 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.252637 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.252712 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.252730 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.252755 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.252774 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.355942 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.355997 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.356006 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.356027 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.356037 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.459711 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.459791 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.459814 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.459844 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.459864 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.490347 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:50:06.580919706 +0000 UTC Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.562708 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.562768 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.562784 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.562807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.562825 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.666352 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.666438 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.666466 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.666501 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.666525 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.741906 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.741910 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:37 crc kubenswrapper[4750]: E0214 13:53:37.742186 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:37 crc kubenswrapper[4750]: E0214 13:53:37.742304 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.769453 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.769518 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.769536 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.769562 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.769585 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.873270 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.873330 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.873349 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.873374 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.873393 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.881428 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.881477 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.881494 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.881515 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.881532 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: E0214 13:53:37.902915 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.908578 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.908634 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.908647 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.908665 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.908682 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: E0214 13:53:37.930128 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.934723 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.934753 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.934780 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.934798 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.934807 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: E0214 13:53:37.955895 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.961336 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.961375 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.961388 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.961403 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.961412 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:37 crc kubenswrapper[4750]: E0214 13:53:37.975989 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:37Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.984472 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.984516 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.984530 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.984548 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:37 crc kubenswrapper[4750]: I0214 13:53:37.984562 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:37Z","lastTransitionTime":"2026-02-14T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: E0214 13:53:38.008561 4750 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9eaedfc-b89c-47f4-85df-878c35f498b6\\\",\\\"systemUUID\\\":\\\"bbcac0cb-82e6-48a0-97c6-f89f2f92ed82\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: E0214 13:53:38.009213 4750 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.011334 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.011409 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.011430 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.011458 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.011480 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.115342 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.115413 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.115436 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.115469 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.115492 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.218822 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.218900 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.218923 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.218953 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.218977 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.321828 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.321894 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.321912 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.321938 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.321960 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.424643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.425073 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.426330 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.426744 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.426948 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.491561 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:02:41.199185003 +0000 UTC Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.530382 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.530437 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.530454 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.530510 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.530534 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.633542 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.633620 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.633644 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.633676 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.633702 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.736988 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.737071 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.737092 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.737154 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.737174 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.741441 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.741544 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:38 crc kubenswrapper[4750]: E0214 13:53:38.741601 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:38 crc kubenswrapper[4750]: E0214 13:53:38.741763 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.759976 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581740c6-1f28-4471-8131-5d5042cc59f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf334c6e937f9b73dfb6b9f1d95a09495d2a89f12e8d1b46f53a0c8ed33c58c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czxln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j5rld\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.782232 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n59sl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7475461f-e0e5-4d5e-91fd-bfe8fb575146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14
T13:53:18Z\\\",\\\"message\\\":\\\"2026-02-14T13:52:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f\\\\n2026-02-14T13:52:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d18ea35d-15c3-4961-84c0-2814e75a413f to /host/opt/cni/bin/\\\\n2026-02-14T13:52:33Z [verbose] multus-daemon started\\\\n2026-02-14T13:52:33Z [verbose] Readiness Indicator file check\\\\n2026-02-14T13:53:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8pp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n59sl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.804799 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"048b1996-b32c-441e-a1ee-b2b60fba2887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a66321f0acac97df0067c95028a43bc34d1aba43ba798cb40559b0b1803bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91934768eec9eb110c42720c28fbcb12d9651e8c9c59e7cf4c8f4150b40d520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22553ac351489d379f1373a40671bf4574f185557967cea99cb5b8d963547466\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T13:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.826818 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76e22b89c1c36b5a8f7e8f59872cfce4b4f1cc4ebcec3da680d1b9d725a4157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3146708ec5d7ad84057ab564858da4a6e733a40eca74e4bc50afa594b9c4ff64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.840582 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.840640 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.840657 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.840700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.840720 4750 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.849815 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5cb1972b041583ae5dbe4fec2445183683fb4655cd696acd6837e86807b6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.874028 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11848800103f77619a6fb2999afabf47afc7a33891fd63578140f6b6260f79a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.895630 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.917256 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-78wgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9d2bd01-539c-4980-8ff6-46efd6a51f43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79a3395d26297f5c12312bb7e300adfdc697021c34745579ea12b4516a5b2af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqrj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-78wgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.941433 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0214 13:52:22.435675 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 13:52:22.436884 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3621194783/tls.crt::/tmp/serving-cert-3621194783/tls.key\\\\\\\"\\\\nI0214 13:52:29.280252 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 13:52:29.285915 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 13:52:29.285942 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 13:52:29.285968 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 13:52:29.285975 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 13:52:29.297384 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0214 13:52:29.297417 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 13:52:29.297432 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 13:52:29.297436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 13:52:29.297442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 13:52:29.297448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0214 13:52:29.297696 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0214 13:52:29.299810 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T1
3:52:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.943903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.943956 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.943977 4750 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.944002 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.944023 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:38Z","lastTransitionTime":"2026-02-14T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.960166 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dc71594-9296-4684-95c4-309213799805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa762b9a42890594b181d9c4517adf07c39527f0c9e20559a074ac1201b9b9b\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ef8c7e543791bd5329e4cdeda0e289480e1f540502212036ccda42881b9f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8abffbb34c9d26eabf444ce1a95f5b7954c9bbc50e90f38bca64daa8269c25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e98c229703fcbdf77992033c0ef0821741abec743d29a2310e1f398e296de5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:38 crc kubenswrapper[4750]: I0214 13:53:38.982058 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2617686e-5f7f-40a4-9654-fee29bbd1d71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458bb75a9a23290e0343f5ca807a47bfa467815cfe2367f45175f7ede8d5f05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c273600e5ec09b70913a0c466cccb1b4eb934b5f07de21aec7d1143aed4bb43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487d24d76318202cd7b8b16ec6e11bd580b0bb9c20d985cd7ede7247ef189aee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3af58d55f8fa4d2d51680460fbf04dbdffbe0625abe2e44e95f944f6f8b23e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1bf
9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1bf9459dc47a30c53c0b9ad89f189ba93c9c2f00cf87a94ceb5af8638c1334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece2915255d224f6963d66701f4c2cabb700e958cc30debb6c6dbf34900e6866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a133151d9318af926da5b333c76c8763fd4807136dc432a70f5d130a804ac7c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdr75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jd2lx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:38Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.013600 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06beb41c-7a86-45c1-85c2-c4f9543961ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T13:53:34Z\\\",\\\"message\\\":\\\"flector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734166 6862 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734553 6862 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.734791 6862 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 13:53:34.739009 6862 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 13:53:34.739066 6862 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 13:53:34.739178 6862 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 13:53:34.739297 6862 factory.go:656] Stopping watch factory\\\\nI0214 13:53:34.739313 6862 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 13:53:34.741792 6862 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0214 13:53:34.741820 6862 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0214 13:53:34.741905 6862 ovnkube.go:599] Stopped ovnkube\\\\nI0214 13:53:34.741950 6862 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0214 13:53:34.742515 6862 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T13:53:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc617987fb2d8905ff
c6656f0cd557c6c803aaf48eb8643252ae87ae4a682691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97rqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4ct5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.027802 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"011928a7-1832-44dc-acf7-7b54adbd2108\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffbc516523f3c6847b9cf42e9a2a84a1298a4bb34a6c25161d8e98953313bb5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a774e487de02a3df49ab57a1351bc81d2f6
8455a04f3628e3c9561cd1a16e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bf5d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.039325 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e95f2324-9bf3-413c-bf13-9432c3d2f8b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95a39773bc8329c56bc3ccefefd82ed52ecc4c69704e09d04af661e4d77e4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://340e4f39302b3688ddb5ffde6c90715014243e119c07d3a3f329b9a870e81bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.048485 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.048534 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.048551 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.048577 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.048598 4750 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.054305 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78xwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a555a0c-f608-450a-b6aa-28dedd5b5e34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2017b73ea6bb69f12dbfa08ef90d3a475f42ab88f50cc775459eff6c45dc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fk2w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78xwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.068103 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.081197 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29305ecd-7a38-4ed0-b02e-b391e5487699\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z2zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T13:52:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l6hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc 
kubenswrapper[4750]: I0214 13:53:39.109300 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:52:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T13:53:39Z is after 2025-08-24T17:21:41Z" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.151957 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.152251 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.152402 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.152558 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.152699 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.255998 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.256062 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.256079 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.256106 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.256163 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.359512 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.359607 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.359635 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.359671 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.359690 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.463865 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.463920 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.463938 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.463963 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.463981 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.492615 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:08:50.906812194 +0000 UTC Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.567205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.567279 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.567311 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.567344 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.567361 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.670414 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.670467 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.670493 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.670523 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.670546 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.741766 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.741768 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:39 crc kubenswrapper[4750]: E0214 13:53:39.741982 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:39 crc kubenswrapper[4750]: E0214 13:53:39.742156 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.764573 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.773648 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.773716 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.773740 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.773769 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.773794 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.877151 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.877240 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.877264 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.877295 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.877317 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.980565 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.980639 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.980658 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.980686 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:39 crc kubenswrapper[4750]: I0214 13:53:39.980702 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:39Z","lastTransitionTime":"2026-02-14T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.084508 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.084591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.084621 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.084659 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.084687 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.187719 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.187787 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.187807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.187834 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.187855 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.291515 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.291583 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.291594 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.291615 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.291629 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.395109 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.395210 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.395229 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.395254 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.395282 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.494032 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:34:48.643711593 +0000 UTC Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.498602 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.498688 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.498710 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.498744 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.498763 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.602412 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.602506 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.602531 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.602560 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.602579 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.706930 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.707036 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.707061 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.707102 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.707162 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.741833 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.742542 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:40 crc kubenswrapper[4750]: E0214 13:53:40.742683 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:40 crc kubenswrapper[4750]: E0214 13:53:40.742956 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.811738 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.811800 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.811819 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.811842 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.811862 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.915866 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.915931 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.915952 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.915978 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:40 crc kubenswrapper[4750]: I0214 13:53:40.915997 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:40Z","lastTransitionTime":"2026-02-14T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.019274 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.019329 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.019342 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.019359 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.019372 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.123476 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.123574 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.123601 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.123630 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.123652 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.226759 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.226822 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.226836 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.226860 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.226875 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.330319 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.330372 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.330389 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.330411 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.330424 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.434208 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.434274 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.434298 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.434330 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.434357 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.494927 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:41:09.587641197 +0000 UTC Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.538546 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.538636 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.538659 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.538687 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.538706 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.642744 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.642814 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.642837 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.642864 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.642883 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.741824 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.741921 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:41 crc kubenswrapper[4750]: E0214 13:53:41.742053 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:41 crc kubenswrapper[4750]: E0214 13:53:41.742327 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.746184 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.746439 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.746528 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.746628 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.746732 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.849915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.850442 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.850564 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.850685 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.850790 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.954679 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.955147 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.955268 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.955503 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:41 crc kubenswrapper[4750]: I0214 13:53:41.955668 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:41Z","lastTransitionTime":"2026-02-14T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.058943 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.059596 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.059754 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.059911 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.060079 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.163721 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.163788 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.163806 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.163834 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.163853 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.266926 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.266993 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.267019 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.267054 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.267080 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.370153 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.370230 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.370250 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.370274 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.370291 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.476107 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.476243 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.476264 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.476299 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.476327 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.495247 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:24:33.901750529 +0000 UTC Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.579860 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.579915 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.579930 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.579950 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.579968 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.684385 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.684452 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.684475 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.684500 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.684520 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.740854 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.740968 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:42 crc kubenswrapper[4750]: E0214 13:53:42.741056 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:42 crc kubenswrapper[4750]: E0214 13:53:42.741242 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.787827 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.787892 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.787908 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.787938 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.787957 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.891083 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.891196 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.891221 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.891252 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.891275 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.994809 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.994898 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.994928 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.994957 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:42 crc kubenswrapper[4750]: I0214 13:53:42.994978 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:42Z","lastTransitionTime":"2026-02-14T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.098203 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.098290 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.098313 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.098345 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.098370 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.203124 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.203330 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.203361 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.203393 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.203419 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.305554 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.305647 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.305660 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.305678 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.305692 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.408615 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.408693 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.408711 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.408747 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.408768 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.495586 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:11:51.913536314 +0000 UTC Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.511997 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.512063 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.512080 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.512106 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.512155 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.614892 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.614973 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.614996 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.615025 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.615048 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.723713 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.723809 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.723842 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.723887 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.723923 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.741166 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.741192 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:43 crc kubenswrapper[4750]: E0214 13:53:43.741348 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:43 crc kubenswrapper[4750]: E0214 13:53:43.741484 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.827695 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.827755 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.827777 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.827807 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.827828 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.931497 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.931542 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.931559 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.931583 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:43 crc kubenswrapper[4750]: I0214 13:53:43.931602 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:43Z","lastTransitionTime":"2026-02-14T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.034725 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.034783 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.034816 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.034844 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.034865 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.137791 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.137861 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.137878 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.137903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.137919 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.241466 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.241533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.241552 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.241578 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.241596 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.345215 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.345274 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.345291 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.345313 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.345327 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.448842 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.448900 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.448912 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.448933 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.448949 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.496230 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:25:03.507201464 +0000 UTC Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.551830 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.551879 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.551889 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.551922 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.551936 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.655325 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.655728 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.655746 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.655772 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.655791 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.741752 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.741762 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:44 crc kubenswrapper[4750]: E0214 13:53:44.741992 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:44 crc kubenswrapper[4750]: E0214 13:53:44.742243 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.759024 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.759060 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.759073 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.759090 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.759101 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.862081 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.862172 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.862187 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.862211 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.862229 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.966271 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.966341 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.966360 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.966388 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:44 crc kubenswrapper[4750]: I0214 13:53:44.966407 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:44Z","lastTransitionTime":"2026-02-14T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.069522 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.069614 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.069632 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.069659 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.069677 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.174469 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.174534 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.174546 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.174570 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.174585 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.278093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.278197 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.278211 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.278237 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.278251 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.382643 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.382750 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.382791 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.382818 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.382838 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.486179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.486254 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.486275 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.486308 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.486330 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.496854 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:47:26.270122552 +0000 UTC Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.588816 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.588866 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.588886 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.588907 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.588920 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.691828 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.691903 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.691913 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.691935 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.691949 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.740971 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.741045 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:45 crc kubenswrapper[4750]: E0214 13:53:45.741137 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:45 crc kubenswrapper[4750]: E0214 13:53:45.741210 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.794894 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.794956 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.795025 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.795045 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.795062 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.898296 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.898645 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.898666 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.898694 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:45 crc kubenswrapper[4750]: I0214 13:53:45.898734 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:45Z","lastTransitionTime":"2026-02-14T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.001533 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.001591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.001608 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.001632 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.001651 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.104319 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.104388 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.104405 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.104433 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.104452 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.207131 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.207179 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.207188 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.207204 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.207214 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.309805 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.309875 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.309896 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.309924 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.309946 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.412290 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.412358 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.412380 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.412412 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.412433 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.498038 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:59:59.148154138 +0000 UTC Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.515464 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.515526 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.515553 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.515820 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.516098 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.623484 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.623591 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.623618 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.624212 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.625442 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.729912 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.729981 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.730004 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.730067 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.730088 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.741451 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.741458 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:46 crc kubenswrapper[4750]: E0214 13:53:46.741596 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:46 crc kubenswrapper[4750]: E0214 13:53:46.741946 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.833759 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.833830 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.833854 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.833883 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.833905 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.936727 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.936802 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.936824 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.936857 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:46 crc kubenswrapper[4750]: I0214 13:53:46.936880 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:46Z","lastTransitionTime":"2026-02-14T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.039961 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.040024 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.040047 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.040078 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.040101 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.143614 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.143700 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.143713 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.143734 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.143748 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.246766 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.246866 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.246926 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.246948 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.246963 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.350349 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.350423 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.350450 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.350479 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.350503 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.455404 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.455481 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.455506 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.455541 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.455566 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.498845 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:16:58.37779946 +0000 UTC Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.558933 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.558980 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.558993 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.559016 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.559029 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.663093 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.663205 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.663228 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.663259 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.663277 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.740991 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.741031 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:47 crc kubenswrapper[4750]: E0214 13:53:47.741252 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:47 crc kubenswrapper[4750]: E0214 13:53:47.741442 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.767362 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.767432 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.767466 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.767492 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.767511 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.870724 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.870805 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.870829 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.870861 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.870886 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.974274 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.974353 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.974378 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.974407 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:47 crc kubenswrapper[4750]: I0214 13:53:47.974430 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:47Z","lastTransitionTime":"2026-02-14T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.077538 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.077614 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.077635 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.077664 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.077687 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:48Z","lastTransitionTime":"2026-02-14T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.180965 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.181041 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.181070 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.181098 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.181159 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:48Z","lastTransitionTime":"2026-02-14T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.209540 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.209647 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.209665 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.209690 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.209712 4750 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T13:53:48Z","lastTransitionTime":"2026-02-14T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.287221 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95"] Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.287840 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.290377 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.290407 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.291102 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.291344 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.326923 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podStartSLOduration=79.32686897 podStartE2EDuration="1m19.32686897s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.308702994 +0000 UTC m=+100.334692495" watchObservedRunningTime="2026-02-14 13:53:48.32686897 +0000 UTC m=+100.352858461" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.327273 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-n59sl" podStartSLOduration=79.32726195 podStartE2EDuration="1m19.32726195s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.326792898 +0000 UTC m=+100.352782419" watchObservedRunningTime="2026-02-14 13:53:48.32726195 +0000 UTC 
m=+100.353251441" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.366528 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.366607 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.366723 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.366841 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.366916 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.383591 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.383566208 podStartE2EDuration="1m14.383566208s" podCreationTimestamp="2026-02-14 13:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.362212765 +0000 UTC m=+100.388202256" watchObservedRunningTime="2026-02-14 13:53:48.383566208 +0000 UTC m=+100.409555699" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.461034 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-78wgq" podStartSLOduration=79.461005752 podStartE2EDuration="1m19.461005752s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.460917059 +0000 UTC m=+100.486906550" watchObservedRunningTime="2026-02-14 13:53:48.461005752 +0000 UTC m=+100.486995233" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.467804 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.467866 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.467905 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.467963 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.467991 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.468059 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.468096 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.469637 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.476905 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.488445 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.48842567 podStartE2EDuration="1m19.48842567s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.4876784 +0000 UTC m=+100.513667901" watchObservedRunningTime="2026-02-14 13:53:48.48842567 +0000 UTC m=+100.514415151" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.494334 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ee0df5-aeb2-43e1-bfc0-230e93aefd70-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8gj95\" (UID: \"76ee0df5-aeb2-43e1-bfc0-230e93aefd70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.500257 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:52:42.500746 +0000 UTC Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.500365 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.507917 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.507884631 podStartE2EDuration="48.507884631s" podCreationTimestamp="2026-02-14 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.506976337 +0000 UTC m=+100.532965858" watchObservedRunningTime="2026-02-14 13:53:48.507884631 +0000 UTC m=+100.533874142" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.514969 4750 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.526580 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jd2lx" podStartSLOduration=79.526550501 podStartE2EDuration="1m19.526550501s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.52615885 +0000 UTC m=+100.552148351" 
watchObservedRunningTime="2026-02-14 13:53:48.526550501 +0000 UTC m=+100.552539982" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.586709 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bf5d6" podStartSLOduration=78.586669942 podStartE2EDuration="1m18.586669942s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.572755492 +0000 UTC m=+100.598744983" watchObservedRunningTime="2026-02-14 13:53:48.586669942 +0000 UTC m=+100.612659463" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.587265 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.587254488 podStartE2EDuration="22.587254488s" podCreationTimestamp="2026-02-14 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.58584991 +0000 UTC m=+100.611839401" watchObservedRunningTime="2026-02-14 13:53:48.587254488 +0000 UTC m=+100.613243999" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.597753 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-78xwc" podStartSLOduration=79.597736584 podStartE2EDuration="1m19.597736584s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.596778028 +0000 UTC m=+100.622767519" watchObservedRunningTime="2026-02-14 13:53:48.597736584 +0000 UTC m=+100.623726105" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.609431 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.667921 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.667889319 podStartE2EDuration="9.667889319s" podCreationTimestamp="2026-02-14 13:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:48.652499499 +0000 UTC m=+100.678489000" watchObservedRunningTime="2026-02-14 13:53:48.667889319 +0000 UTC m=+100.693878820" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.741575 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:48 crc kubenswrapper[4750]: E0214 13:53:48.743255 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.743414 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:48 crc kubenswrapper[4750]: E0214 13:53:48.743643 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:48 crc kubenswrapper[4750]: I0214 13:53:48.972887 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:48 crc kubenswrapper[4750]: E0214 13:53:48.973238 4750 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:53:48 crc kubenswrapper[4750]: E0214 13:53:48.973375 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs podName:29305ecd-7a38-4ed0-b02e-b391e5487699 nodeName:}" failed. No retries permitted until 2026-02-14 13:54:52.973346678 +0000 UTC m=+164.999336199 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs") pod "network-metrics-daemon-l6hd4" (UID: "29305ecd-7a38-4ed0-b02e-b391e5487699") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 13:53:49 crc kubenswrapper[4750]: I0214 13:53:49.610964 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" event={"ID":"76ee0df5-aeb2-43e1-bfc0-230e93aefd70","Type":"ContainerStarted","Data":"75dc3d403ec973f1aa424cb826a463032c3679c50260c44de0ec06af116444b8"} Feb 14 13:53:49 crc kubenswrapper[4750]: I0214 13:53:49.611013 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" event={"ID":"76ee0df5-aeb2-43e1-bfc0-230e93aefd70","Type":"ContainerStarted","Data":"2253403349a52aa1f15710e80a98f7c699480201f29d857852431363882cf7b3"} Feb 14 13:53:49 crc kubenswrapper[4750]: I0214 13:53:49.629593 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8gj95" podStartSLOduration=79.629570593 podStartE2EDuration="1m19.629570593s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:53:49.629062509 +0000 UTC m=+101.655052010" watchObservedRunningTime="2026-02-14 13:53:49.629570593 +0000 UTC m=+101.655560094" Feb 14 13:53:49 crc kubenswrapper[4750]: I0214 13:53:49.741257 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:49 crc kubenswrapper[4750]: I0214 13:53:49.741824 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:49 crc kubenswrapper[4750]: E0214 13:53:49.742094 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:49 crc kubenswrapper[4750]: E0214 13:53:49.742519 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:49 crc kubenswrapper[4750]: I0214 13:53:49.743541 4750 scope.go:117] "RemoveContainer" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" Feb 14 13:53:49 crc kubenswrapper[4750]: E0214 13:53:49.743778 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:53:50 crc kubenswrapper[4750]: I0214 13:53:50.741469 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:50 crc kubenswrapper[4750]: I0214 13:53:50.741583 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:50 crc kubenswrapper[4750]: E0214 13:53:50.741653 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:50 crc kubenswrapper[4750]: E0214 13:53:50.741798 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:51 crc kubenswrapper[4750]: I0214 13:53:51.741758 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:51 crc kubenswrapper[4750]: I0214 13:53:51.741822 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:51 crc kubenswrapper[4750]: E0214 13:53:51.742086 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:51 crc kubenswrapper[4750]: E0214 13:53:51.742250 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:52 crc kubenswrapper[4750]: I0214 13:53:52.741445 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:52 crc kubenswrapper[4750]: I0214 13:53:52.741471 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:52 crc kubenswrapper[4750]: E0214 13:53:52.741983 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:52 crc kubenswrapper[4750]: E0214 13:53:52.742053 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:53 crc kubenswrapper[4750]: I0214 13:53:53.741192 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:53 crc kubenswrapper[4750]: I0214 13:53:53.741242 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:53 crc kubenswrapper[4750]: E0214 13:53:53.741334 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:53 crc kubenswrapper[4750]: E0214 13:53:53.741495 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:54 crc kubenswrapper[4750]: I0214 13:53:54.741198 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:54 crc kubenswrapper[4750]: E0214 13:53:54.741454 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:54 crc kubenswrapper[4750]: I0214 13:53:54.741791 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:54 crc kubenswrapper[4750]: E0214 13:53:54.742097 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:55 crc kubenswrapper[4750]: I0214 13:53:55.740956 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:55 crc kubenswrapper[4750]: I0214 13:53:55.741026 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:55 crc kubenswrapper[4750]: E0214 13:53:55.741184 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:55 crc kubenswrapper[4750]: E0214 13:53:55.741373 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:56 crc kubenswrapper[4750]: I0214 13:53:56.741470 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:56 crc kubenswrapper[4750]: I0214 13:53:56.741513 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:56 crc kubenswrapper[4750]: E0214 13:53:56.741704 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:56 crc kubenswrapper[4750]: E0214 13:53:56.741788 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:57 crc kubenswrapper[4750]: I0214 13:53:57.741189 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:57 crc kubenswrapper[4750]: I0214 13:53:57.741300 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:57 crc kubenswrapper[4750]: E0214 13:53:57.741383 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:57 crc kubenswrapper[4750]: E0214 13:53:57.741515 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:53:58 crc kubenswrapper[4750]: I0214 13:53:58.741283 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:53:58 crc kubenswrapper[4750]: I0214 13:53:58.741329 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:53:58 crc kubenswrapper[4750]: E0214 13:53:58.743172 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:53:58 crc kubenswrapper[4750]: E0214 13:53:58.743272 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:53:59 crc kubenswrapper[4750]: I0214 13:53:59.741906 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:53:59 crc kubenswrapper[4750]: E0214 13:53:59.742033 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:53:59 crc kubenswrapper[4750]: I0214 13:53:59.741931 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:53:59 crc kubenswrapper[4750]: E0214 13:53:59.743587 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:00 crc kubenswrapper[4750]: I0214 13:54:00.741760 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:00 crc kubenswrapper[4750]: I0214 13:54:00.741896 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:00 crc kubenswrapper[4750]: E0214 13:54:00.741957 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:00 crc kubenswrapper[4750]: E0214 13:54:00.742166 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:01 crc kubenswrapper[4750]: I0214 13:54:01.741738 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:01 crc kubenswrapper[4750]: I0214 13:54:01.741784 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:01 crc kubenswrapper[4750]: E0214 13:54:01.741939 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:01 crc kubenswrapper[4750]: E0214 13:54:01.742055 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:01 crc kubenswrapper[4750]: I0214 13:54:01.742827 4750 scope.go:117] "RemoveContainer" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" Feb 14 13:54:01 crc kubenswrapper[4750]: E0214 13:54:01.742980 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4ct5_openshift-ovn-kubernetes(06beb41c-7a86-45c1-85c2-c4f9543961ea)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" Feb 14 13:54:02 crc kubenswrapper[4750]: I0214 13:54:02.741623 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:02 crc kubenswrapper[4750]: I0214 13:54:02.741872 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:02 crc kubenswrapper[4750]: E0214 13:54:02.742008 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:02 crc kubenswrapper[4750]: E0214 13:54:02.742421 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:03 crc kubenswrapper[4750]: I0214 13:54:03.741324 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:03 crc kubenswrapper[4750]: E0214 13:54:03.741494 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:03 crc kubenswrapper[4750]: I0214 13:54:03.741563 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:03 crc kubenswrapper[4750]: E0214 13:54:03.741778 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.674573 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/1.log" Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.676259 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/0.log" Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.676388 4750 generic.go:334] "Generic (PLEG): container finished" podID="7475461f-e0e5-4d5e-91fd-bfe8fb575146" containerID="ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300" exitCode=1 Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.676443 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerDied","Data":"ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300"} Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.676500 4750 scope.go:117] "RemoveContainer" containerID="08be925470ed77ca24505a0cb22d0935a24fc899d60d8a5e5ef3320b43e6d202" Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.677027 4750 scope.go:117] "RemoveContainer" containerID="ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300" Feb 14 13:54:04 crc kubenswrapper[4750]: E0214 13:54:04.677341 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-n59sl_openshift-multus(7475461f-e0e5-4d5e-91fd-bfe8fb575146)\"" pod="openshift-multus/multus-n59sl" podUID="7475461f-e0e5-4d5e-91fd-bfe8fb575146" Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.741763 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:04 crc kubenswrapper[4750]: I0214 13:54:04.741821 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:04 crc kubenswrapper[4750]: E0214 13:54:04.741943 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:04 crc kubenswrapper[4750]: E0214 13:54:04.742265 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:05 crc kubenswrapper[4750]: I0214 13:54:05.682297 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/1.log" Feb 14 13:54:05 crc kubenswrapper[4750]: I0214 13:54:05.741762 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:05 crc kubenswrapper[4750]: I0214 13:54:05.741861 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:05 crc kubenswrapper[4750]: E0214 13:54:05.741987 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:05 crc kubenswrapper[4750]: E0214 13:54:05.742145 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:06 crc kubenswrapper[4750]: I0214 13:54:06.741333 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:06 crc kubenswrapper[4750]: I0214 13:54:06.741493 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:06 crc kubenswrapper[4750]: E0214 13:54:06.741805 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:06 crc kubenswrapper[4750]: E0214 13:54:06.741865 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:07 crc kubenswrapper[4750]: I0214 13:54:07.740829 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:07 crc kubenswrapper[4750]: I0214 13:54:07.740905 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:07 crc kubenswrapper[4750]: E0214 13:54:07.741044 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:07 crc kubenswrapper[4750]: E0214 13:54:07.741198 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:08 crc kubenswrapper[4750]: E0214 13:54:08.672785 4750 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 14 13:54:08 crc kubenswrapper[4750]: I0214 13:54:08.741410 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:08 crc kubenswrapper[4750]: E0214 13:54:08.747681 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:08 crc kubenswrapper[4750]: I0214 13:54:08.747806 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:08 crc kubenswrapper[4750]: E0214 13:54:08.748070 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:09 crc kubenswrapper[4750]: E0214 13:54:09.495461 4750 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 13:54:09 crc kubenswrapper[4750]: I0214 13:54:09.741742 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:09 crc kubenswrapper[4750]: I0214 13:54:09.741778 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:09 crc kubenswrapper[4750]: E0214 13:54:09.741901 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:09 crc kubenswrapper[4750]: E0214 13:54:09.741977 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:10 crc kubenswrapper[4750]: I0214 13:54:10.741777 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:10 crc kubenswrapper[4750]: E0214 13:54:10.741978 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:10 crc kubenswrapper[4750]: I0214 13:54:10.741785 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:10 crc kubenswrapper[4750]: E0214 13:54:10.742483 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:11 crc kubenswrapper[4750]: I0214 13:54:11.741237 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:11 crc kubenswrapper[4750]: E0214 13:54:11.741443 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:11 crc kubenswrapper[4750]: I0214 13:54:11.741543 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:11 crc kubenswrapper[4750]: E0214 13:54:11.741764 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:12 crc kubenswrapper[4750]: I0214 13:54:12.740847 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:12 crc kubenswrapper[4750]: E0214 13:54:12.741062 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:12 crc kubenswrapper[4750]: I0214 13:54:12.741230 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:12 crc kubenswrapper[4750]: E0214 13:54:12.741685 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:13 crc kubenswrapper[4750]: I0214 13:54:13.741370 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:13 crc kubenswrapper[4750]: I0214 13:54:13.741437 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:13 crc kubenswrapper[4750]: E0214 13:54:13.741615 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:13 crc kubenswrapper[4750]: E0214 13:54:13.741712 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:14 crc kubenswrapper[4750]: E0214 13:54:14.496764 4750 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 13:54:14 crc kubenswrapper[4750]: I0214 13:54:14.740994 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:14 crc kubenswrapper[4750]: I0214 13:54:14.741175 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:14 crc kubenswrapper[4750]: E0214 13:54:14.741190 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:14 crc kubenswrapper[4750]: E0214 13:54:14.741382 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:15 crc kubenswrapper[4750]: I0214 13:54:15.741144 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:15 crc kubenswrapper[4750]: I0214 13:54:15.741163 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:15 crc kubenswrapper[4750]: E0214 13:54:15.741271 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:15 crc kubenswrapper[4750]: E0214 13:54:15.741380 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:15 crc kubenswrapper[4750]: I0214 13:54:15.741745 4750 scope.go:117] "RemoveContainer" containerID="ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300" Feb 14 13:54:16 crc kubenswrapper[4750]: I0214 13:54:16.731016 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/1.log" Feb 14 13:54:16 crc kubenswrapper[4750]: I0214 13:54:16.731298 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerStarted","Data":"990256c8f115726952dc644dd7da82c2b60c46ad99f40b0f2d44563cc83e28be"} Feb 14 13:54:16 crc kubenswrapper[4750]: I0214 13:54:16.740977 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:16 crc kubenswrapper[4750]: E0214 13:54:16.741235 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:16 crc kubenswrapper[4750]: I0214 13:54:16.741253 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:16 crc kubenswrapper[4750]: E0214 13:54:16.741801 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:16 crc kubenswrapper[4750]: I0214 13:54:16.741990 4750 scope.go:117] "RemoveContainer" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" Feb 14 13:54:17 crc kubenswrapper[4750]: I0214 13:54:17.737491 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/3.log" Feb 14 13:54:17 crc kubenswrapper[4750]: I0214 13:54:17.740725 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:17 crc kubenswrapper[4750]: I0214 13:54:17.740726 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:17 crc kubenswrapper[4750]: E0214 13:54:17.740895 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:17 crc kubenswrapper[4750]: E0214 13:54:17.741134 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:17 crc kubenswrapper[4750]: I0214 13:54:17.741740 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerStarted","Data":"e6df6013586eb2af879223e68fa447672996885c6ff49d92f45f4092caafce33"} Feb 14 13:54:17 crc kubenswrapper[4750]: I0214 13:54:17.742255 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:54:17 crc kubenswrapper[4750]: I0214 13:54:17.780762 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l6hd4"] Feb 14 13:54:18 crc kubenswrapper[4750]: I0214 13:54:18.740959 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:18 crc kubenswrapper[4750]: I0214 13:54:18.741394 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:18 crc kubenswrapper[4750]: E0214 13:54:18.742743 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:18 crc kubenswrapper[4750]: E0214 13:54:18.742947 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:18 crc kubenswrapper[4750]: I0214 13:54:18.746546 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:18 crc kubenswrapper[4750]: E0214 13:54:18.746736 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:19 crc kubenswrapper[4750]: E0214 13:54:19.498899 4750 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 13:54:19 crc kubenswrapper[4750]: I0214 13:54:19.741330 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:19 crc kubenswrapper[4750]: E0214 13:54:19.741875 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:20 crc kubenswrapper[4750]: I0214 13:54:20.741268 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:20 crc kubenswrapper[4750]: I0214 13:54:20.741433 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:20 crc kubenswrapper[4750]: E0214 13:54:20.741619 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:20 crc kubenswrapper[4750]: I0214 13:54:20.741659 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:20 crc kubenswrapper[4750]: E0214 13:54:20.741750 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:20 crc kubenswrapper[4750]: E0214 13:54:20.741844 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:21 crc kubenswrapper[4750]: I0214 13:54:21.741571 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:21 crc kubenswrapper[4750]: E0214 13:54:21.741789 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:22 crc kubenswrapper[4750]: I0214 13:54:22.741252 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:22 crc kubenswrapper[4750]: I0214 13:54:22.741279 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:22 crc kubenswrapper[4750]: E0214 13:54:22.741443 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 13:54:22 crc kubenswrapper[4750]: E0214 13:54:22.741572 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 13:54:22 crc kubenswrapper[4750]: I0214 13:54:22.741836 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:22 crc kubenswrapper[4750]: E0214 13:54:22.742032 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l6hd4" podUID="29305ecd-7a38-4ed0-b02e-b391e5487699" Feb 14 13:54:23 crc kubenswrapper[4750]: I0214 13:54:23.741192 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:23 crc kubenswrapper[4750]: E0214 13:54:23.741372 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.741661 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.741724 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.741693 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.746182 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.749684 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.749921 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.753755 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.754310 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 14 13:54:24 crc kubenswrapper[4750]: I0214 13:54:24.756453 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 14 13:54:25 crc kubenswrapper[4750]: I0214 13:54:25.741858 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.744089 4750 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.795221 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podStartSLOduration=118.795172794 podStartE2EDuration="1m58.795172794s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:17.795537845 +0000 UTC m=+129.821527356" watchObservedRunningTime="2026-02-14 13:54:28.795172794 +0000 UTC m=+140.821162335" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.798564 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sbvl5"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.800736 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.809504 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.815871 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ktrwb"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.816997 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.819637 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f84rf"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.820098 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.820150 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.820097 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.823937 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.824209 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.824729 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.831290 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.831614 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.832035 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.832312 4750 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.832423 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.833088 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.833297 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.835459 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.836411 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.836839 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.838111 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.838476 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.838967 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.839641 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.839926 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.840405 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.841621 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.842094 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.842104 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.860888 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.861209 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.861821 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.861967 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.862542 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.862669 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.862776 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.862872 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.862974 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863002 4750 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863074 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863087 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863197 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863213 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863214 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863249 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863273 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.863197 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.864939 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.865666 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.871040 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kzvww"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.871937 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-66qtn"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.872501 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.872541 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.877583 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.877694 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.877702 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.877851 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.877769 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.877919 4750 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.878060 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.878107 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.880761 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.883757 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.884563 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.903320 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.911482 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.911617 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.912267 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.912347 4750 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.912600 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.928946 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.929756 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.930749 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.930889 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.931155 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.931379 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.931444 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.931705 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.932199 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.932565 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.932026 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.937013 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.945804 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.945897 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.945950 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946053 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946190 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946233 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946463 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 
13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946528 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946752 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.946794 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.954662 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.961745 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2p4bf"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.964290 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.965195 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dfv6f"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.964734 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.966461 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.963066 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.967276 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qccxx"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.967544 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.967754 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.967804 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.969152 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.969493 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.969967 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.969984 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mfrd4"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.970332 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.970466 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.970947 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.971177 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.971420 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.971727 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.971941 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.971985 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-dir\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972013 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-etcd-serving-ca\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972036 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19eb8e0f-83bc-40d2-a994-ba669171915e-images\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972058 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-etcd-client\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972081 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/456beeb0-9309-48a6-9985-a0f57a40c10b-node-pullsecrets\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972104 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972155 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972164 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972182 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2r4\" (UniqueName: \"kubernetes.io/projected/19eb8e0f-83bc-40d2-a994-ba669171915e-kube-api-access-8n2r4\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972204 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972225 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-config\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972251 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972266 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c212db-b819-4b16-80ba-31aa1d95d3e2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972289 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrcjh\" (UniqueName: \"kubernetes.io/projected/ed95f649-6925-4589-b1fe-90d10d2b266a-kube-api-access-mrcjh\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972330 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972349 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc 
kubenswrapper[4750]: I0214 13:54:28.972376 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12522fe4-a583-49f0-baa1-2b6420bfd351-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk4fk\" (UID: \"12522fe4-a583-49f0-baa1-2b6420bfd351\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972408 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-client-ca\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972431 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hpv\" (UniqueName: \"kubernetes.io/projected/12522fe4-a583-49f0-baa1-2b6420bfd351-kube-api-access-k2hpv\") pod \"cluster-samples-operator-665b6dd947-pk4fk\" (UID: \"12522fe4-a583-49f0-baa1-2b6420bfd351\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972456 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-audit\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972479 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/456beeb0-9309-48a6-9985-a0f57a40c10b-audit-dir\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972503 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-serving-cert\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972558 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972642 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972709 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.972960 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2zdhw"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.974913 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.974967 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.974990 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-audit-policies\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975014 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjtq\" (UniqueName: \"kubernetes.io/projected/6fe262ca-0856-4067-9ce3-13eeaf1f8768-kube-api-access-8mjtq\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975041 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e1e327-0874-4dfa-848f-991c961edc93-serving-cert\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975063 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed95f649-6925-4589-b1fe-90d10d2b266a-serving-cert\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975086 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-service-ca-bundle\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975146 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jrt\" (UniqueName: \"kubernetes.io/projected/78c212db-b819-4b16-80ba-31aa1d95d3e2-kube-api-access-62jrt\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975170 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fe262ca-0856-4067-9ce3-13eeaf1f8768-audit-dir\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975195 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c212db-b819-4b16-80ba-31aa1d95d3e2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975225 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-serving-cert\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975272 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrrq\" (UniqueName: \"kubernetes.io/projected/81c0849a-dd49-44b8-a94d-ad1138ab0246-kube-api-access-9rrrq\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975299 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-encryption-config\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975343 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-config\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975365 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-etcd-client\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975392 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975417 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6t54\" (UniqueName: \"kubernetes.io/projected/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-kube-api-access-r6t54\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975438 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975465 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975487 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-client-ca\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975510 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-policies\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975551 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975576 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975602 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-encryption-config\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975628 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19eb8e0f-83bc-40d2-a994-ba669171915e-config\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975665 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975694 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-image-import-ca\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975720 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c0849a-dd49-44b8-a94d-ad1138ab0246-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975800 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsp8m\" (UniqueName: 
\"kubernetes.io/projected/82e1e327-0874-4dfa-848f-991c961edc93-kube-api-access-nsp8m\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975829 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsdw\" (UniqueName: \"kubernetes.io/projected/456beeb0-9309-48a6-9985-a0f57a40c10b-kube-api-access-2jsdw\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975852 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-config\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975875 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975896 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/19eb8e0f-83bc-40d2-a994-ba669171915e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975918 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-config\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975940 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.975965 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.978376 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.982239 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.982260 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.985319 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.985368 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.985471 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.985590 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.985762 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.977863 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.991756 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.992209 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.992328 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.993079 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.993680 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.994045 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.994191 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.994359 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.996258 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.996779 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.997503 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.997798 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.997904 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9"] Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.998199 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.998376 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.998503 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 13:54:28 crc kubenswrapper[4750]: I0214 13:54:28.999841 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.000299 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.001941 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.001493 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.003703 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.001725 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.001887 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.003477 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.005291 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.010795 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.010825 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.010960 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.012225 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.013883 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.017096 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-66qtn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.017175 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.017402 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.017603 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pcdmn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.018591 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.018975 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.019198 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nt44b"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.019317 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.019613 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.019805 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.020559 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.023045 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49gzj"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.023727 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.024701 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8px"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.025174 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.025954 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.026478 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v7pb5"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.027243 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.028577 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.029007 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.030856 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5ptgn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.031590 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.032253 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.033051 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.033839 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ktrwb"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.036261 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.037421 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.038828 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kzvww"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.039972 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2p4bf"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.041359 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f84rf"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.042733 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sbvl5"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.043897 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lh5rn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.045226 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.045334 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.046864 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5ptgn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.047186 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.047850 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.049010 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.051323 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.053655 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.058847 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.061292 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dfv6f"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.062765 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.065790 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.066675 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.075513 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2zdhw"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076521 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076557 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx9f\" (UniqueName: \"kubernetes.io/projected/0ea9b0e9-7732-4e69-98a7-208a8626682c-kube-api-access-qxx9f\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076579 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-dir\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076595 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/19eb8e0f-83bc-40d2-a994-ba669171915e-images\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076611 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-etcd-client\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076628 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076645 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-etcd-serving-ca\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076662 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: 
I0214 13:54:29.076678 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea9b0e9-7732-4e69-98a7-208a8626682c-config\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076699 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4643208-f942-4075-aa65-178bc272c07d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076719 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/456beeb0-9309-48a6-9985-a0f57a40c10b-node-pullsecrets\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076743 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076747 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-dir\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076762 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2r4\" (UniqueName: \"kubernetes.io/projected/19eb8e0f-83bc-40d2-a994-ba669171915e-kube-api-access-8n2r4\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076783 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4243e3e9-15c2-46a0-a695-5520b5921af0-auth-proxy-config\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076806 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmjs\" (UniqueName: \"kubernetes.io/projected/4243e3e9-15c2-46a0-a695-5520b5921af0-kube-api-access-2tmjs\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076826 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4643208-f942-4075-aa65-178bc272c07d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076850 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076868 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-config\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076893 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c212db-b819-4b16-80ba-31aa1d95d3e2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076912 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12522fe4-a583-49f0-baa1-2b6420bfd351-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk4fk\" (UID: \"12522fe4-a583-49f0-baa1-2b6420bfd351\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076936 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrcjh\" (UniqueName: \"kubernetes.io/projected/ed95f649-6925-4589-b1fe-90d10d2b266a-kube-api-access-mrcjh\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076956 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076972 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.076988 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077005 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-client-ca\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077022 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea9b0e9-7732-4e69-98a7-208a8626682c-serving-cert\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077037 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-audit\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077053 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/456beeb0-9309-48a6-9985-a0f57a40c10b-audit-dir\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077068 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hpv\" (UniqueName: \"kubernetes.io/projected/12522fe4-a583-49f0-baa1-2b6420bfd351-kube-api-access-k2hpv\") pod \"cluster-samples-operator-665b6dd947-pk4fk\" (UID: \"12522fe4-a583-49f0-baa1-2b6420bfd351\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077085 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-audit-policies\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 
13:54:29.077100 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mjtq\" (UniqueName: \"kubernetes.io/projected/6fe262ca-0856-4067-9ce3-13eeaf1f8768-kube-api-access-8mjtq\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077152 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37420801-d7c5-4fe1-82e4-557f2c7f68e7-config\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077176 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-serving-cert\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077192 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077209 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: 
\"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077225 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e1e327-0874-4dfa-848f-991c961edc93-serving-cert\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077241 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37420801-d7c5-4fe1-82e4-557f2c7f68e7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077259 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed95f649-6925-4589-b1fe-90d10d2b266a-serving-cert\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077275 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-service-ca-bundle\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077299 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-62jrt\" (UniqueName: \"kubernetes.io/projected/78c212db-b819-4b16-80ba-31aa1d95d3e2-kube-api-access-62jrt\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077315 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fe262ca-0856-4067-9ce3-13eeaf1f8768-audit-dir\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077332 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrrq\" (UniqueName: \"kubernetes.io/projected/81c0849a-dd49-44b8-a94d-ad1138ab0246-kube-api-access-9rrrq\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077346 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c212db-b819-4b16-80ba-31aa1d95d3e2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077361 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-serving-cert\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc 
kubenswrapper[4750]: I0214 13:54:29.077383 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-encryption-config\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077400 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4243e3e9-15c2-46a0-a695-5520b5921af0-machine-approver-tls\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077416 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4643208-f942-4075-aa65-178bc272c07d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077434 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hcq\" (UniqueName: \"kubernetes.io/projected/b4643208-f942-4075-aa65-178bc272c07d-kube-api-access-k7hcq\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077451 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-etcd-client\") pod \"apiserver-76f77b778f-sbvl5\" (UID: 
\"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077458 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-etcd-serving-ca\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077466 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4243e3e9-15c2-46a0-a695-5520b5921af0-config\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077489 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-config\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077505 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077521 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-client-ca\") 
pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077537 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-policies\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077551 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6t54\" (UniqueName: \"kubernetes.io/projected/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-kube-api-access-r6t54\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077568 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077585 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077603 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-config\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077618 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ea9b0e9-7732-4e69-98a7-208a8626682c-trusted-ca\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077637 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077652 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077668 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077687 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37420801-d7c5-4fe1-82e4-557f2c7f68e7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077703 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-encryption-config\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077721 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19eb8e0f-83bc-40d2-a994-ba669171915e-config\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077743 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-image-import-ca\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077759 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/81c0849a-dd49-44b8-a94d-ad1138ab0246-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077775 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077792 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsp8m\" (UniqueName: \"kubernetes.io/projected/82e1e327-0874-4dfa-848f-991c961edc93-kube-api-access-nsp8m\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077810 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077856 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/19eb8e0f-83bc-40d2-a994-ba669171915e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 
13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077906 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfvk\" (UniqueName: \"kubernetes.io/projected/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-kube-api-access-jpfvk\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077926 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsdw\" (UniqueName: \"kubernetes.io/projected/456beeb0-9309-48a6-9985-a0f57a40c10b-kube-api-access-2jsdw\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077943 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-config\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077960 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-config\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077977 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.077993 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.078621 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/456beeb0-9309-48a6-9985-a0f57a40c10b-node-pullsecrets\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.078731 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qccxx"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.078953 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.079333 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c212db-b819-4b16-80ba-31aa1d95d3e2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: 
\"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.079436 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19eb8e0f-83bc-40d2-a994-ba669171915e-images\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.079919 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.079968 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.080018 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-config\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.080169 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-service-ca-bundle\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" 
Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.080230 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fe262ca-0856-4067-9ce3-13eeaf1f8768-audit-dir\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.080533 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-client-ca\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.080874 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-audit\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.080944 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/456beeb0-9309-48a6-9985-a0f57a40c10b-audit-dir\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.081220 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-audit-policies\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 
13:54:29.081699 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.081790 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.081808 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-client-ca\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.082403 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-policies\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.082613 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-config\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc 
kubenswrapper[4750]: I0214 13:54:29.083376 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-image-import-ca\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.083379 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e1e327-0874-4dfa-848f-991c961edc93-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.083842 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-config\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.083931 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.083998 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.084029 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.084095 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19eb8e0f-83bc-40d2-a994-ba669171915e-config\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.084176 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.084385 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.084569 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456beeb0-9309-48a6-9985-a0f57a40c10b-config\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.084962 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ktrwb\" 
(UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.085233 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.085496 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pcdmn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.085587 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe262ca-0856-4067-9ce3-13eeaf1f8768-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.085832 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/19eb8e0f-83bc-40d2-a994-ba669171915e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.086334 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" 
Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.086454 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed95f649-6925-4589-b1fe-90d10d2b266a-serving-cert\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.088072 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.088086 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.088332 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.088482 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-encryption-config\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.088833 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: 
\"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.089193 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c0849a-dd49-44b8-a94d-ad1138ab0246-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.089240 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.092474 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.092522 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mfrd4"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.092536 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.095620 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.097082 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.097137 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c212db-b819-4b16-80ba-31aa1d95d3e2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.097379 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-encryption-config\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.098213 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v7pb5"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.098685 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-etcd-client\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.098747 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.099070 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:29 crc kubenswrapper[4750]: 
I0214 13:54:29.099813 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-etcd-client\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.100198 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lh5rn"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.100787 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456beeb0-9309-48a6-9985-a0f57a40c10b-serving-cert\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.100818 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e1e327-0874-4dfa-848f-991c961edc93-serving-cert\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.101183 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fe262ca-0856-4067-9ce3-13eeaf1f8768-serving-cert\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.101541 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49gzj"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.102619 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.103247 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12522fe4-a583-49f0-baa1-2b6420bfd351-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk4fk\" (UID: \"12522fe4-a583-49f0-baa1-2b6420bfd351\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.107708 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9nxpw"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.108390 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8px"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.108413 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qhxgj"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.108965 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.111484 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.111765 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.112174 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qhxgj"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.112202 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9nxpw"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.112932 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2gvbw"] Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.113374 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.125873 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.147227 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.165984 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179092 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx9f\" (UniqueName: \"kubernetes.io/projected/0ea9b0e9-7732-4e69-98a7-208a8626682c-kube-api-access-qxx9f\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179156 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179176 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea9b0e9-7732-4e69-98a7-208a8626682c-config\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179193 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4643208-f942-4075-aa65-178bc272c07d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179455 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4243e3e9-15c2-46a0-a695-5520b5921af0-auth-proxy-config\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179475 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tmjs\" (UniqueName: \"kubernetes.io/projected/4243e3e9-15c2-46a0-a695-5520b5921af0-kube-api-access-2tmjs\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc 
kubenswrapper[4750]: I0214 13:54:29.179491 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4643208-f942-4075-aa65-178bc272c07d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179521 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179537 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179571 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea9b0e9-7732-4e69-98a7-208a8626682c-serving-cert\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179699 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37420801-d7c5-4fe1-82e4-557f2c7f68e7-config\") pod 
\"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179749 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37420801-d7c5-4fe1-82e4-557f2c7f68e7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179816 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4243e3e9-15c2-46a0-a695-5520b5921af0-machine-approver-tls\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179836 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4643208-f942-4075-aa65-178bc272c07d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.179940 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7hcq\" (UniqueName: \"kubernetes.io/projected/b4643208-f942-4075-aa65-178bc272c07d-kube-api-access-k7hcq\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180344 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4243e3e9-15c2-46a0-a695-5520b5921af0-config\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180392 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-config\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180399 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea9b0e9-7732-4e69-98a7-208a8626682c-config\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180416 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ea9b0e9-7732-4e69-98a7-208a8626682c-trusted-ca\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180437 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180480 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37420801-d7c5-4fe1-82e4-557f2c7f68e7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180527 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpfvk\" (UniqueName: \"kubernetes.io/projected/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-kube-api-access-jpfvk\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180708 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37420801-d7c5-4fe1-82e4-557f2c7f68e7-config\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.180908 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4243e3e9-15c2-46a0-a695-5520b5921af0-auth-proxy-config\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.181671 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-config\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.181685 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4243e3e9-15c2-46a0-a695-5520b5921af0-config\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.182781 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ea9b0e9-7732-4e69-98a7-208a8626682c-trusted-ca\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.183082 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea9b0e9-7732-4e69-98a7-208a8626682c-serving-cert\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.183426 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 
13:54:29.185239 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.186290 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.187305 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.187526 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4243e3e9-15c2-46a0-a695-5520b5921af0-machine-approver-tls\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.187914 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37420801-d7c5-4fe1-82e4-557f2c7f68e7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.207166 4750 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.247048 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.253513 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4643208-f942-4075-aa65-178bc272c07d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.273542 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.282291 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4643208-f942-4075-aa65-178bc272c07d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.286168 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.306749 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.326803 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.347602 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 13:54:29 crc 
kubenswrapper[4750]: I0214 13:54:29.366960 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.386610 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.427896 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.448241 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.467137 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.486477 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.506815 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.527408 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.546040 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.566456 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 14 13:54:29 
crc kubenswrapper[4750]: I0214 13:54:29.586863 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.606716 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.626590 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.647827 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.666469 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.687746 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.707212 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.727380 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.747453 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.766917 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.787105 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.807062 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.827788 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.847375 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.867999 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.886465 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.908089 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.927096 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.947422 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 14 13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.972433 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 
13:54:29 crc kubenswrapper[4750]: I0214 13:54:29.987332 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.007872 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.025425 4750 request.go:700] Waited for 1.004898521s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.027638 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.047036 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.067598 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.087658 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.107081 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.128052 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.162518 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 13:54:30 crc 
kubenswrapper[4750]: I0214 13:54:30.167405 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.187334 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.206230 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.226525 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.248695 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.266723 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.286766 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.306805 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.327358 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.346658 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.368587 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 
14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.388158 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.406835 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.428924 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.448325 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.467000 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.487848 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.507885 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.527052 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.547412 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.566856 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.586818 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.607238 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.626770 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.647374 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.666951 4750 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.686898 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.727735 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrrq\" (UniqueName: \"kubernetes.io/projected/81c0849a-dd49-44b8-a94d-ad1138ab0246-kube-api-access-9rrrq\") pod \"route-controller-manager-6576b87f9c-vwht9\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.744727 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2r4\" (UniqueName: \"kubernetes.io/projected/19eb8e0f-83bc-40d2-a994-ba669171915e-kube-api-access-8n2r4\") pod \"machine-api-operator-5694c8668f-kzvww\" (UID: \"19eb8e0f-83bc-40d2-a994-ba669171915e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.756542 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.763461 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrcjh\" (UniqueName: \"kubernetes.io/projected/ed95f649-6925-4589-b1fe-90d10d2b266a-kube-api-access-mrcjh\") pod \"controller-manager-879f6c89f-ktrwb\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.778882 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.790099 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62jrt\" (UniqueName: \"kubernetes.io/projected/78c212db-b819-4b16-80ba-31aa1d95d3e2-kube-api-access-62jrt\") pod \"openshift-apiserver-operator-796bbdcf4f-24fsk\" (UID: \"78c212db-b819-4b16-80ba-31aa1d95d3e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.803832 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjtq\" (UniqueName: \"kubernetes.io/projected/6fe262ca-0856-4067-9ce3-13eeaf1f8768-kube-api-access-8mjtq\") pod \"apiserver-7bbb656c7d-c4l2n\" (UID: \"6fe262ca-0856-4067-9ce3-13eeaf1f8768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.824445 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2hpv\" (UniqueName: \"kubernetes.io/projected/12522fe4-a583-49f0-baa1-2b6420bfd351-kube-api-access-k2hpv\") pod \"cluster-samples-operator-665b6dd947-pk4fk\" (UID: \"12522fe4-a583-49f0-baa1-2b6420bfd351\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.841643 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsp8m\" (UniqueName: \"kubernetes.io/projected/82e1e327-0874-4dfa-848f-991c961edc93-kube-api-access-nsp8m\") pod \"authentication-operator-69f744f599-66qtn\" (UID: \"82e1e327-0874-4dfa-848f-991c961edc93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.869633 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsdw\" (UniqueName: \"kubernetes.io/projected/456beeb0-9309-48a6-9985-a0f57a40c10b-kube-api-access-2jsdw\") pod \"apiserver-76f77b778f-sbvl5\" (UID: \"456beeb0-9309-48a6-9985-a0f57a40c10b\") " pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.889511 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.890429 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6t54\" (UniqueName: \"kubernetes.io/projected/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-kube-api-access-r6t54\") pod \"oauth-openshift-558db77b4-f84rf\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.906927 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.926881 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.931455 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.946757 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.957502 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.964379 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.966334 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.976434 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"] Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.976786 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:30 crc kubenswrapper[4750]: I0214 13:54:30.986790 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.002084 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kzvww"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.007310 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.015314 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.026672 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.045765 4750 request.go:700] Waited for 1.932195751s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.050068 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.053588 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.069149 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.073753 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.116785 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx9f\" (UniqueName: \"kubernetes.io/projected/0ea9b0e9-7732-4e69-98a7-208a8626682c-kube-api-access-qxx9f\") pod \"console-operator-58897d9998-mfrd4\" (UID: \"0ea9b0e9-7732-4e69-98a7-208a8626682c\") " pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.124266 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4643208-f942-4075-aa65-178bc272c07d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.132309 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sbvl5"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.159339 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd3b81a8-83ce-488e-b41d-eed56bc4bc6f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dz45f\" (UID: \"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.160588 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tmjs\" (UniqueName: 
\"kubernetes.io/projected/4243e3e9-15c2-46a0-a695-5520b5921af0-kube-api-access-2tmjs\") pod \"machine-approver-56656f9798-x9q7n\" (UID: \"4243e3e9-15c2-46a0-a695-5520b5921af0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.165284 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.174425 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.182910 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7hcq\" (UniqueName: \"kubernetes.io/projected/b4643208-f942-4075-aa65-178bc272c07d-kube-api-access-k7hcq\") pod \"ingress-operator-5b745b69d9-5n96f\" (UID: \"b4643208-f942-4075-aa65-178bc272c07d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.185950 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.215323 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpfvk\" (UniqueName: \"kubernetes.io/projected/f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d-kube-api-access-jpfvk\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lb86\" (UID: \"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.227372 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37420801-d7c5-4fe1-82e4-557f2c7f68e7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-72jbc\" (UID: \"37420801-d7c5-4fe1-82e4-557f2c7f68e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.236013 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ktrwb"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.260277 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n"] Feb 14 13:54:31 crc kubenswrapper[4750]: W0214 13:54:31.264331 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded95f649_6925_4589_b1fe_90d10d2b266a.slice/crio-de1aed8744bb745d74d66c6c0d544823bb64710b21f6c35b4af2549c892e6177 WatchSource:0}: Error finding container de1aed8744bb745d74d66c6c0d544823bb64710b21f6c35b4af2549c892e6177: Status 404 returned error can't find the container with id de1aed8744bb745d74d66c6c0d544823bb64710b21f6c35b4af2549c892e6177 Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.310775 4750 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f84rf"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321766 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321818 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9th4\" (UniqueName: \"kubernetes.io/projected/99f80072-8a72-46de-b04e-2c6015ee4154-kube-api-access-l9th4\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321849 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-trusted-ca\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321877 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-serving-cert\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321899 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-console-config\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321937 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f80072-8a72-46de-b04e-2c6015ee4154-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.321990 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-registry-tls\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.322433 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtx2q\" (UniqueName: \"kubernetes.io/projected/f76c5124-cf8d-43a4-a700-d19319e6a329-kube-api-access-dtx2q\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.322524 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-registry-certificates\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.322565 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-service-ca\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.322599 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa59d575-852e-4348-843a-c78681df1a9d-images\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.324249 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:31.824229737 +0000 UTC m=+143.850219218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.324819 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrjw\" (UniqueName: \"kubernetes.io/projected/6adb3cca-d5b3-4216-868b-73086725a3ed-kube-api-access-njrjw\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.325074 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6da4864c-6af8-4a50-b55f-c98d904808be-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.325174 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-trusted-ca-bundle\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.325209 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/99f80072-8a72-46de-b04e-2c6015ee4154-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.325309 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69dbf559-3da4-4374-9098-028d9390f555-metrics-tls\") pod \"dns-operator-744455d44c-2p4bf\" (UID: \"69dbf559-3da4-4374-9098-028d9390f555\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.337537 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-bound-sa-token\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338024 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-oauth-config\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338047 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa59d575-852e-4348-843a-c78681df1a9d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 
14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338071 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvgj\" (UniqueName: \"kubernetes.io/projected/fa59d575-852e-4348-843a-c78681df1a9d-kube-api-access-cxvgj\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338135 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f76c5124-cf8d-43a4-a700-d19319e6a329-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338197 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6da4864c-6af8-4a50-b55f-c98d904808be-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338251 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcb8l\" (UniqueName: \"kubernetes.io/projected/94fbc5e9-43f4-4efc-9964-f6dfe96a982b-kube-api-access-bcb8l\") pod \"downloads-7954f5f757-dfv6f\" (UID: \"94fbc5e9-43f4-4efc-9964-f6dfe96a982b\") " pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338281 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jqd\" 
(UniqueName: \"kubernetes.io/projected/69dbf559-3da4-4374-9098-028d9390f555-kube-api-access-h7jqd\") pod \"dns-operator-744455d44c-2p4bf\" (UID: \"69dbf559-3da4-4374-9098-028d9390f555\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338310 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99f80072-8a72-46de-b04e-2c6015ee4154-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338377 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trz84\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-kube-api-access-trz84\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338431 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76c5124-cf8d-43a4-a700-d19319e6a329-serving-cert\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338520 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-oauth-serving-cert\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " 
pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.338552 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa59d575-852e-4348-843a-c78681df1a9d-proxy-tls\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.368943 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-66qtn"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.372352 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk"] Feb 14 13:54:31 crc kubenswrapper[4750]: W0214 13:54:31.411335 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82e1e327_0874_4dfa_848f_991c961edc93.slice/crio-d03ff0589d107c32b71b2c8e4090f7ac23a39fad367239ee60fd5ea87a3d6098 WatchSource:0}: Error finding container d03ff0589d107c32b71b2c8e4090f7ac23a39fad367239ee60fd5ea87a3d6098: Status 404 returned error can't find the container with id d03ff0589d107c32b71b2c8e4090f7ac23a39fad367239ee60fd5ea87a3d6098 Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.424602 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.438653 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.439523 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.439815 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76c5124-cf8d-43a4-a700-d19319e6a329-serving-cert\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.439871 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5j8k\" (UniqueName: \"kubernetes.io/projected/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-kube-api-access-w5j8k\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.439904 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-oauth-serving-cert\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.440973 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:31.940938793 +0000 UTC m=+143.966928434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.441595 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqh64\" (UniqueName: \"kubernetes.io/projected/d2bc6a85-c626-4e48-a3a3-696026a7aa5b-kube-api-access-kqh64\") pod \"ingress-canary-9nxpw\" (UID: \"d2bc6a85-c626-4e48-a3a3-696026a7aa5b\") " pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.441640 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-trusted-ca\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.441880 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-serving-cert\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.441928 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zbb9g\" (UniqueName: \"kubernetes.io/projected/b1f936b3-23f5-4adb-8211-d0b58664e9e2-kube-api-access-zbb9g\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.441967 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442028 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f80072-8a72-46de-b04e-2c6015ee4154-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442059 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fmk\" (UniqueName: \"kubernetes.io/projected/78cf262a-619b-4edd-bffb-55e5d454b23b-kube-api-access-x6fmk\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442084 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-metrics-certs\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " 
pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442164 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pkj\" (UniqueName: \"kubernetes.io/projected/2e5a7961-9acb-485c-bca3-dcf013c839fa-kube-api-access-z7pkj\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442195 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6dd6d380-1734-4106-9717-f5207e5cec61-metrics-tls\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442226 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w24mx\" (UniqueName: \"kubernetes.io/projected/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-kube-api-access-w24mx\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442252 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-registration-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442545 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtx2q\" (UniqueName: 
\"kubernetes.io/projected/f76c5124-cf8d-43a4-a700-d19319e6a329-kube-api-access-dtx2q\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.443530 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-oauth-serving-cert\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.445310 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa59d575-852e-4348-843a-c78681df1a9d-images\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.445614 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.446377 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-trusted-ca\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.446620 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f80072-8a72-46de-b04e-2c6015ee4154-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.442572 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa59d575-852e-4348-843a-c78681df1a9d-images\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447306 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd4jf\" (UniqueName: \"kubernetes.io/projected/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-kube-api-access-pd4jf\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447334 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447382 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-registry-certificates\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447408 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-service-ca\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447430 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-service-ca\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447473 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrjw\" (UniqueName: \"kubernetes.io/projected/6adb3cca-d5b3-4216-868b-73086725a3ed-kube-api-access-njrjw\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447493 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwbt\" (UniqueName: \"kubernetes.io/projected/39156cc1-cba3-4fce-b877-82ee6ac6ce02-kube-api-access-2cwbt\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447528 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447550 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-client\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447634 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447656 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/99f80072-8a72-46de-b04e-2c6015ee4154-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447675 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024eec-9ab4-4573-a92e-634b43e88be3-config\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447705 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-trusted-ca-bundle\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447742 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2ac50c2-e66e-4771-925f-01766b0b2fd8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447765 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e7e404e4-7975-4a48-8818-592130041f58-signing-key\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc 
kubenswrapper[4750]: I0214 13:54:31.447786 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-webhook-cert\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447805 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-srv-cert\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447826 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a31fba8a-dc4a-47b9-81b1-c5301ce38775-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pcdmn\" (UID: \"a31fba8a-dc4a-47b9-81b1-c5301ce38775\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447850 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cf262a-619b-4edd-bffb-55e5d454b23b-config-volume\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447869 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-bound-sa-token\") pod 
\"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447892 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa59d575-852e-4348-843a-c78681df1a9d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447943 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbprg\" (UniqueName: \"kubernetes.io/projected/540c1ef7-ff20-4f5f-952d-2c62dc6fe72d-kube-api-access-nbprg\") pod \"package-server-manager-789f6589d5-8svsj\" (UID: \"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447974 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4gx\" (UniqueName: \"kubernetes.io/projected/a31fba8a-dc4a-47b9-81b1-c5301ce38775-kube-api-access-8v4gx\") pod \"multus-admission-controller-857f4d67dd-pcdmn\" (UID: \"a31fba8a-dc4a-47b9-81b1-c5301ce38775\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.447995 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-csi-data-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448012 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd6d380-1734-4106-9717-f5207e5cec61-config-volume\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448041 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-socket-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448069 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448093 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa59d575-852e-4348-843a-c78681df1a9d-proxy-tls\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448164 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e5a7961-9acb-485c-bca3-dcf013c839fa-service-ca-bundle\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " 
pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448186 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448214 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rv7k\" (UniqueName: \"kubernetes.io/projected/6dd6d380-1734-4106-9717-f5207e5cec61-kube-api-access-5rv7k\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448236 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9th4\" (UniqueName: \"kubernetes.io/projected/99f80072-8a72-46de-b04e-2c6015ee4154-kube-api-access-l9th4\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448254 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448298 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2ac50c2-e66e-4771-925f-01766b0b2fd8-proxy-tls\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448316 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-console-config\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448350 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d1b8a23-779e-49fe-8e23-d9c4a53117b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rv4bx\" (UID: \"9d1b8a23-779e-49fe-8e23-d9c4a53117b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448370 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gkt\" (UniqueName: \"kubernetes.io/projected/3ee53a13-e1e0-4062-960c-554ac2d87327-kube-api-access-b4gkt\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448404 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-registry-tls\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448421 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg659\" (UniqueName: \"kubernetes.io/projected/f2ac50c2-e66e-4771-925f-01766b0b2fd8-kube-api-access-qg659\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448438 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-ca\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448456 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448472 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e7e404e4-7975-4a48-8818-592130041f58-signing-cabundle\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448491 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/571a6788-2357-4b0b-8791-f0efd6b03cb5-certs\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448523 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/571a6788-2357-4b0b-8791-f0efd6b03cb5-node-bootstrap-token\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448546 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448571 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6da4864c-6af8-4a50-b55f-c98d904808be-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448594 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htnts\" (UniqueName: \"kubernetes.io/projected/eb609f2a-ec21-4191-be9d-99c56059f587-kube-api-access-htnts\") pod \"migrator-59844c95c7-rb9d9\" (UID: \"eb609f2a-ec21-4191-be9d-99c56059f587\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448615 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxs74\" (UniqueName: \"kubernetes.io/projected/9d1b8a23-779e-49fe-8e23-d9c4a53117b0-kube-api-access-sxs74\") pod \"control-plane-machine-set-operator-78cbb6b69f-rv4bx\" (UID: \"9d1b8a23-779e-49fe-8e23-d9c4a53117b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448631 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78cf262a-619b-4edd-bffb-55e5d454b23b-secret-volume\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448666 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-mountpoint-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448686 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzhw\" (UniqueName: \"kubernetes.io/projected/7295b75e-297a-4aec-bcd5-b2a048229985-kube-api-access-9wzhw\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448704 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l7wjg\" (UniqueName: \"kubernetes.io/projected/571a6788-2357-4b0b-8791-f0efd6b03cb5-kube-api-access-l7wjg\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448720 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ee53a13-e1e0-4062-960c-554ac2d87327-srv-cert\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448748 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1f936b3-23f5-4adb-8211-d0b58664e9e2-serving-cert\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448764 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-stats-auth\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448782 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ee53a13-e1e0-4062-960c-554ac2d87327-profile-collector-cert\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" 
Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448807 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4672\" (UniqueName: \"kubernetes.io/projected/b7024eec-9ab4-4573-a92e-634b43e88be3-kube-api-access-c4672\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448825 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-tmpfs\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448843 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69dbf559-3da4-4374-9098-028d9390f555-metrics-tls\") pod \"dns-operator-744455d44c-2p4bf\" (UID: \"69dbf559-3da4-4374-9098-028d9390f555\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448863 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxvgj\" (UniqueName: \"kubernetes.io/projected/fa59d575-852e-4348-843a-c78681df1a9d-kube-api-access-cxvgj\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448885 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-oauth-config\") pod 
\"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448904 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f76c5124-cf8d-43a4-a700-d19319e6a329-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448922 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/540c1ef7-ff20-4f5f-952d-2c62dc6fe72d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8svsj\" (UID: \"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448943 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-plugins-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448959 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2bc6a85-c626-4e48-a3a3-696026a7aa5b-cert\") pod \"ingress-canary-9nxpw\" (UID: \"d2bc6a85-c626-4e48-a3a3-696026a7aa5b\") " pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448973 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8l56\" (UniqueName: \"kubernetes.io/projected/e7e404e4-7975-4a48-8818-592130041f58-kube-api-access-s8l56\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.448990 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6da4864c-6af8-4a50-b55f-c98d904808be-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449007 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcb8l\" (UniqueName: \"kubernetes.io/projected/94fbc5e9-43f4-4efc-9964-f6dfe96a982b-kube-api-access-bcb8l\") pod \"downloads-7954f5f757-dfv6f\" (UID: \"94fbc5e9-43f4-4efc-9964-f6dfe96a982b\") " pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449023 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jqd\" (UniqueName: \"kubernetes.io/projected/69dbf559-3da4-4374-9098-028d9390f555-kube-api-access-h7jqd\") pod \"dns-operator-744455d44c-2p4bf\" (UID: \"69dbf559-3da4-4374-9098-028d9390f555\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449039 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-config\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" 
Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449058 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-default-certificate\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449077 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99f80072-8a72-46de-b04e-2c6015ee4154-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449094 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7024eec-9ab4-4573-a92e-634b43e88be3-serving-cert\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449137 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trz84\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-kube-api-access-trz84\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.449245 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-registry-certificates\") pod 
\"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.450340 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa59d575-852e-4348-843a-c78681df1a9d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.453356 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-trusted-ca-bundle\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.456781 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-oauth-config\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.457048 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f76c5124-cf8d-43a4-a700-d19319e6a329-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.457081 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fa59d575-852e-4348-843a-c78681df1a9d-proxy-tls\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.457331 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6da4864c-6af8-4a50-b55f-c98d904808be-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.457813 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-console-config\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.457913 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-serving-cert\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.459018 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-service-ca\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.459261 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f76c5124-cf8d-43a4-a700-d19319e6a329-serving-cert\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.464161 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/99f80072-8a72-46de-b04e-2c6015ee4154-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.468156 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6da4864c-6af8-4a50-b55f-c98d904808be-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.468531 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-registry-tls\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.468913 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69dbf559-3da4-4374-9098-028d9390f555-metrics-tls\") pod \"dns-operator-744455d44c-2p4bf\" (UID: \"69dbf559-3da4-4374-9098-028d9390f555\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.497869 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtx2q\" (UniqueName: \"kubernetes.io/projected/f76c5124-cf8d-43a4-a700-d19319e6a329-kube-api-access-dtx2q\") pod \"openshift-config-operator-7777fb866f-dj6l8\" (UID: \"f76c5124-cf8d-43a4-a700-d19319e6a329\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.507096 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-bound-sa-token\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.520462 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk"] Feb 14 13:54:31 crc kubenswrapper[4750]: W0214 13:54:31.528500 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4243e3e9_15c2_46a0_a695_5520b5921af0.slice/crio-f4a1e644212105a44cbd5c0c9535b6264c925b19f799daf7f1e723bf98ef27bc WatchSource:0}: Error finding container f4a1e644212105a44cbd5c0c9535b6264c925b19f799daf7f1e723bf98ef27bc: Status 404 returned error can't find the container with id f4a1e644212105a44cbd5c0c9535b6264c925b19f799daf7f1e723bf98ef27bc Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.529867 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxvgj\" (UniqueName: \"kubernetes.io/projected/fa59d575-852e-4348-843a-c78681df1a9d-kube-api-access-cxvgj\") pod \"machine-config-operator-74547568cd-xm5fz\" (UID: \"fa59d575-852e-4348-843a-c78681df1a9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: 
I0214 13:54:31.538429 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.546216 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9th4\" (UniqueName: \"kubernetes.io/projected/99f80072-8a72-46de-b04e-2c6015ee4154-kube-api-access-l9th4\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551100 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rv7k\" (UniqueName: \"kubernetes.io/projected/6dd6d380-1734-4106-9717-f5207e5cec61-kube-api-access-5rv7k\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551156 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2ac50c2-e66e-4771-925f-01766b0b2fd8-proxy-tls\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551177 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551198 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d1b8a23-779e-49fe-8e23-d9c4a53117b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rv4bx\" (UID: \"9d1b8a23-779e-49fe-8e23-d9c4a53117b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551218 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4gkt\" (UniqueName: \"kubernetes.io/projected/3ee53a13-e1e0-4062-960c-554ac2d87327-kube-api-access-b4gkt\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551293 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg659\" (UniqueName: \"kubernetes.io/projected/f2ac50c2-e66e-4771-925f-01766b0b2fd8-kube-api-access-qg659\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551316 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-ca\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551335 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: 
\"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551363 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e7e404e4-7975-4a48-8818-592130041f58-signing-cabundle\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551384 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/571a6788-2357-4b0b-8791-f0efd6b03cb5-certs\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551413 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/571a6788-2357-4b0b-8791-f0efd6b03cb5-node-bootstrap-token\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551433 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551453 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htnts\" (UniqueName: 
\"kubernetes.io/projected/eb609f2a-ec21-4191-be9d-99c56059f587-kube-api-access-htnts\") pod \"migrator-59844c95c7-rb9d9\" (UID: \"eb609f2a-ec21-4191-be9d-99c56059f587\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551472 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78cf262a-619b-4edd-bffb-55e5d454b23b-secret-volume\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551488 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-mountpoint-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551507 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxs74\" (UniqueName: \"kubernetes.io/projected/9d1b8a23-779e-49fe-8e23-d9c4a53117b0-kube-api-access-sxs74\") pod \"control-plane-machine-set-operator-78cbb6b69f-rv4bx\" (UID: \"9d1b8a23-779e-49fe-8e23-d9c4a53117b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551524 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1f936b3-23f5-4adb-8211-d0b58664e9e2-serving-cert\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551540 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-stats-auth\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551555 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wzhw\" (UniqueName: \"kubernetes.io/projected/7295b75e-297a-4aec-bcd5-b2a048229985-kube-api-access-9wzhw\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551573 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7wjg\" (UniqueName: \"kubernetes.io/projected/571a6788-2357-4b0b-8791-f0efd6b03cb5-kube-api-access-l7wjg\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551591 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ee53a13-e1e0-4062-960c-554ac2d87327-srv-cert\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551644 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ee53a13-e1e0-4062-960c-554ac2d87327-profile-collector-cert\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551663 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-tmpfs\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551680 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4672\" (UniqueName: \"kubernetes.io/projected/b7024eec-9ab4-4573-a92e-634b43e88be3-kube-api-access-c4672\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551700 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/540c1ef7-ff20-4f5f-952d-2c62dc6fe72d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8svsj\" (UID: \"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551718 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-plugins-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551753 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-config\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551771 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2bc6a85-c626-4e48-a3a3-696026a7aa5b-cert\") pod \"ingress-canary-9nxpw\" (UID: \"d2bc6a85-c626-4e48-a3a3-696026a7aa5b\") " pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551788 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8l56\" (UniqueName: \"kubernetes.io/projected/e7e404e4-7975-4a48-8818-592130041f58-kube-api-access-s8l56\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551812 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7024eec-9ab4-4573-a92e-634b43e88be3-serving-cert\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551827 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-default-certificate\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551855 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5j8k\" 
(UniqueName: \"kubernetes.io/projected/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-kube-api-access-w5j8k\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551874 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551934 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqh64\" (UniqueName: \"kubernetes.io/projected/d2bc6a85-c626-4e48-a3a3-696026a7aa5b-kube-api-access-kqh64\") pod \"ingress-canary-9nxpw\" (UID: \"d2bc6a85-c626-4e48-a3a3-696026a7aa5b\") " pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551955 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbb9g\" (UniqueName: \"kubernetes.io/projected/b1f936b3-23f5-4adb-8211-d0b58664e9e2-kube-api-access-zbb9g\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.551984 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc 
kubenswrapper[4750]: I0214 13:54:31.551985 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-mountpoint-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552002 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fmk\" (UniqueName: \"kubernetes.io/projected/78cf262a-619b-4edd-bffb-55e5d454b23b-kube-api-access-x6fmk\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552018 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-metrics-certs\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552037 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6dd6d380-1734-4106-9717-f5207e5cec61-metrics-tls\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552052 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7pkj\" (UniqueName: \"kubernetes.io/projected/2e5a7961-9acb-485c-bca3-dcf013c839fa-kube-api-access-z7pkj\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc 
kubenswrapper[4750]: I0214 13:54:31.552070 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w24mx\" (UniqueName: \"kubernetes.io/projected/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-kube-api-access-w24mx\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552085 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-registration-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552127 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd4jf\" (UniqueName: \"kubernetes.io/projected/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-kube-api-access-pd4jf\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552147 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552181 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-service-ca\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552203 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwbt\" (UniqueName: \"kubernetes.io/projected/39156cc1-cba3-4fce-b877-82ee6ac6ce02-kube-api-access-2cwbt\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552220 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552236 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-client\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552258 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc 
kubenswrapper[4750]: I0214 13:54:31.552275 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024eec-9ab4-4573-a92e-634b43e88be3-config\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552293 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2ac50c2-e66e-4771-925f-01766b0b2fd8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552310 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e7e404e4-7975-4a48-8818-592130041f58-signing-key\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552325 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-webhook-cert\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552341 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-srv-cert\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552358 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a31fba8a-dc4a-47b9-81b1-c5301ce38775-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pcdmn\" (UID: \"a31fba8a-dc4a-47b9-81b1-c5301ce38775\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552377 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cf262a-619b-4edd-bffb-55e5d454b23b-config-volume\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552406 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbprg\" (UniqueName: \"kubernetes.io/projected/540c1ef7-ff20-4f5f-952d-2c62dc6fe72d-kube-api-access-nbprg\") pod \"package-server-manager-789f6589d5-8svsj\" (UID: \"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552424 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4gx\" (UniqueName: \"kubernetes.io/projected/a31fba8a-dc4a-47b9-81b1-c5301ce38775-kube-api-access-8v4gx\") pod \"multus-admission-controller-857f4d67dd-pcdmn\" (UID: \"a31fba8a-dc4a-47b9-81b1-c5301ce38775\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552438 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-csi-data-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552453 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd6d380-1734-4106-9717-f5207e5cec61-config-volume\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552472 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-socket-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552488 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552504 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e5a7961-9acb-485c-bca3-dcf013c839fa-service-ca-bundle\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.552839 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.553223 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.554591 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-ca\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.555133 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78cf262a-619b-4edd-bffb-55e5d454b23b-secret-volume\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.555322 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-registration-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.555629 4750 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.556234 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-service-ca\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.557481 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024eec-9ab4-4573-a92e-634b43e88be3-config\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.558281 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2ac50c2-e66e-4771-925f-01766b0b2fd8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.558504 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1f936b3-23f5-4adb-8211-d0b58664e9e2-etcd-client\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 
13:54:31.558654 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-csi-data-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.559210 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd6d380-1734-4106-9717-f5207e5cec61-config-volume\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.559281 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-socket-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.559814 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e7e404e4-7975-4a48-8818-592130041f58-signing-cabundle\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.560652 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d1b8a23-779e-49fe-8e23-d9c4a53117b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rv4bx\" (UID: \"9d1b8a23-779e-49fe-8e23-d9c4a53117b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 
13:54:31.560722 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2ac50c2-e66e-4771-925f-01766b0b2fd8-proxy-tls\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.561089 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.562177 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-stats-auth\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.562373 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/571a6788-2357-4b0b-8791-f0efd6b03cb5-node-bootstrap-token\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.563275 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1f936b3-23f5-4adb-8211-d0b58664e9e2-serving-cert\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 
crc kubenswrapper[4750]: I0214 13:54:31.563839 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e5a7961-9acb-485c-bca3-dcf013c839fa-service-ca-bundle\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.564319 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-tmpfs\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.564534 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7295b75e-297a-4aec-bcd5-b2a048229985-plugins-dir\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.564630 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f936b3-23f5-4adb-8211-d0b58664e9e2-config\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.565517 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mfrd4"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.565564 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cf262a-619b-4edd-bffb-55e5d454b23b-config-volume\") pod 
\"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.565945 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.065927113 +0000 UTC m=+144.091916594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.565959 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.566042 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.566439 4750 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.567777 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7024eec-9ab4-4573-a92e-634b43e88be3-serving-cert\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.567870 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-default-certificate\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.567914 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-webhook-cert\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.567921 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/571a6788-2357-4b0b-8791-f0efd6b03cb5-certs\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.568562 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ee53a13-e1e0-4062-960c-554ac2d87327-profile-collector-cert\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.569005 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.571421 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6dd6d380-1734-4106-9717-f5207e5cec61-metrics-tls\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.571613 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.571620 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2bc6a85-c626-4e48-a3a3-696026a7aa5b-cert\") pod \"ingress-canary-9nxpw\" (UID: \"d2bc6a85-c626-4e48-a3a3-696026a7aa5b\") " pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.572361 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcb8l\" (UniqueName: \"kubernetes.io/projected/94fbc5e9-43f4-4efc-9964-f6dfe96a982b-kube-api-access-bcb8l\") pod \"downloads-7954f5f757-dfv6f\" (UID: \"94fbc5e9-43f4-4efc-9964-f6dfe96a982b\") " pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.572516 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e7e404e4-7975-4a48-8818-592130041f58-signing-key\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.573005 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ee53a13-e1e0-4062-960c-554ac2d87327-srv-cert\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.573884 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a31fba8a-dc4a-47b9-81b1-c5301ce38775-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pcdmn\" (UID: \"a31fba8a-dc4a-47b9-81b1-c5301ce38775\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.574374 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e5a7961-9acb-485c-bca3-dcf013c839fa-metrics-certs\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.574442 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-srv-cert\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.579736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/540c1ef7-ff20-4f5f-952d-2c62dc6fe72d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8svsj\" (UID: \"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.593296 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trz84\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-kube-api-access-trz84\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.639902 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrjw\" (UniqueName: \"kubernetes.io/projected/6adb3cca-d5b3-4216-868b-73086725a3ed-kube-api-access-njrjw\") pod \"console-f9d7485db-qccxx\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.643078 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jqd\" (UniqueName: \"kubernetes.io/projected/69dbf559-3da4-4374-9098-028d9390f555-kube-api-access-h7jqd\") pod \"dns-operator-744455d44c-2p4bf\" (UID: \"69dbf559-3da4-4374-9098-028d9390f555\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.657695 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.658086 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.158070735 +0000 UTC m=+144.184060216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.681431 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99f80072-8a72-46de-b04e-2c6015ee4154-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wpsj4\" (UID: \"99f80072-8a72-46de-b04e-2c6015ee4154\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.695558 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.706334 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxs74\" (UniqueName: \"kubernetes.io/projected/9d1b8a23-779e-49fe-8e23-d9c4a53117b0-kube-api-access-sxs74\") pod \"control-plane-machine-set-operator-78cbb6b69f-rv4bx\" (UID: \"9d1b8a23-779e-49fe-8e23-d9c4a53117b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.715206 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.717267 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.720276 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7wjg\" (UniqueName: \"kubernetes.io/projected/571a6788-2357-4b0b-8791-f0efd6b03cb5-kube-api-access-l7wjg\") pod \"machine-config-server-2gvbw\" (UID: \"571a6788-2357-4b0b-8791-f0efd6b03cb5\") " pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.732412 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.733099 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.734064 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.739358 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2gvbw" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.742062 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7pkj\" (UniqueName: \"kubernetes.io/projected/2e5a7961-9acb-485c-bca3-dcf013c839fa-kube-api-access-z7pkj\") pod \"router-default-5444994796-nt44b\" (UID: \"2e5a7961-9acb-485c-bca3-dcf013c839fa\") " pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.743915 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4gkt\" (UniqueName: \"kubernetes.io/projected/3ee53a13-e1e0-4062-960c-554ac2d87327-kube-api-access-b4gkt\") pod \"catalog-operator-68c6474976-9qkbr\" (UID: \"3ee53a13-e1e0-4062-960c-554ac2d87327\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.768005 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.769989 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg659\" (UniqueName: \"kubernetes.io/projected/f2ac50c2-e66e-4771-925f-01766b0b2fd8-kube-api-access-qg659\") pod \"machine-config-controller-84d6567774-tlrzt\" (UID: \"f2ac50c2-e66e-4771-925f-01766b0b2fd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.772594 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.775054 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.275035351 +0000 UTC m=+144.301024832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.785324 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.786472 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rv7k\" (UniqueName: \"kubernetes.io/projected/6dd6d380-1734-4106-9717-f5207e5cec61-kube-api-access-5rv7k\") pod \"dns-default-qhxgj\" (UID: \"6dd6d380-1734-4106-9717-f5207e5cec61\") " pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.802755 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wzhw\" (UniqueName: \"kubernetes.io/projected/7295b75e-297a-4aec-bcd5-b2a048229985-kube-api-access-9wzhw\") pod \"csi-hostpathplugin-lh5rn\" (UID: \"7295b75e-297a-4aec-bcd5-b2a048229985\") " pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:31 crc kubenswrapper[4750]: W0214 13:54:31.807788 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd3b81a8_83ce_488e_b41d_eed56bc4bc6f.slice/crio-8672f9ce4065a5722a37a6e8abfaf53ef8ae58ec20a5689fb121045f9f673f05 WatchSource:0}: Error finding container 8672f9ce4065a5722a37a6e8abfaf53ef8ae58ec20a5689fb121045f9f673f05: Status 404 returned error can't find the container with id 8672f9ce4065a5722a37a6e8abfaf53ef8ae58ec20a5689fb121045f9f673f05 Feb 14 
13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.808600 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" event={"ID":"19eb8e0f-83bc-40d2-a994-ba669171915e","Type":"ContainerStarted","Data":"2321d27e291c6a50289224184c9a15c357c6ca1db8ddeb1bd8d0bb575b983326"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.808633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" event={"ID":"19eb8e0f-83bc-40d2-a994-ba669171915e","Type":"ContainerStarted","Data":"3b8a2ca8a3c5a309d8e8b671e68f094e468092d98a454d7bb52d0d2af199b60e"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.808645 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" event={"ID":"19eb8e0f-83bc-40d2-a994-ba669171915e","Type":"ContainerStarted","Data":"405e8aab00e47bff9bdf62fc44a1bb42fe7e7cd7532c70b1b15801d269affdc7"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.812207 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" event={"ID":"0ea9b0e9-7732-4e69-98a7-208a8626682c","Type":"ContainerStarted","Data":"2d4a81fcce033c1d86dfab1d25271c012e2d9984e49bdc051684427d06594091"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.813731 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" event={"ID":"12522fe4-a583-49f0-baa1-2b6420bfd351","Type":"ContainerStarted","Data":"a012805d6dd3ed26141829cb649d0380e5d691104ac3d0bbba54bfa5a133125d"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.814826 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" 
event={"ID":"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6","Type":"ContainerStarted","Data":"019cca46ae840f14274e5325f416cdc4fe46b531a60308bd7338f491f6c0529e"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.816223 4750 generic.go:334] "Generic (PLEG): container finished" podID="456beeb0-9309-48a6-9985-a0f57a40c10b" containerID="fd4a9770088b4450a2c32a0a4e6106f9d42e3292a5a0827d7d67ed37280b163c" exitCode=0 Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.816266 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" event={"ID":"456beeb0-9309-48a6-9985-a0f57a40c10b","Type":"ContainerDied","Data":"fd4a9770088b4450a2c32a0a4e6106f9d42e3292a5a0827d7d67ed37280b163c"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.816283 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" event={"ID":"456beeb0-9309-48a6-9985-a0f57a40c10b","Type":"ContainerStarted","Data":"e5dd1c4d69faefba00fa1f3abb894d15bd1fc6167927e4865b6606c551e62b0f"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.818685 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" event={"ID":"6fe262ca-0856-4067-9ce3-13eeaf1f8768","Type":"ContainerStarted","Data":"ac92113f4f36519cda6dafe884b13df88e9eca63ca9171cf104af316d9f95dc4"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.821703 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htnts\" (UniqueName: \"kubernetes.io/projected/eb609f2a-ec21-4191-be9d-99c56059f587-kube-api-access-htnts\") pod \"migrator-59844c95c7-rb9d9\" (UID: \"eb609f2a-ec21-4191-be9d-99c56059f587\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.825130 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" 
event={"ID":"82e1e327-0874-4dfa-848f-991c961edc93","Type":"ContainerStarted","Data":"0ac4a1ec08e8826a86cee89dffc0d8260c91a4737612ce73d0134c43ed48a76e"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.825185 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" event={"ID":"82e1e327-0874-4dfa-848f-991c961edc93","Type":"ContainerStarted","Data":"d03ff0589d107c32b71b2c8e4090f7ac23a39fad367239ee60fd5ea87a3d6098"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.827399 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" event={"ID":"81c0849a-dd49-44b8-a94d-ad1138ab0246","Type":"ContainerStarted","Data":"bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.827425 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" event={"ID":"81c0849a-dd49-44b8-a94d-ad1138ab0246","Type":"ContainerStarted","Data":"50af079416bb9e1a9d8637d3588723e9757322958cbf855c61b4a2c3a8a590cd"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.828159 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.829802 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" event={"ID":"ed95f649-6925-4589-b1fe-90d10d2b266a","Type":"ContainerStarted","Data":"08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.829822 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" 
event={"ID":"ed95f649-6925-4589-b1fe-90d10d2b266a","Type":"ContainerStarted","Data":"de1aed8744bb745d74d66c6c0d544823bb64710b21f6c35b4af2549c892e6177"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.830253 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.832963 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" event={"ID":"4243e3e9-15c2-46a0-a695-5520b5921af0","Type":"ContainerStarted","Data":"f4a1e644212105a44cbd5c0c9535b6264c925b19f799daf7f1e723bf98ef27bc"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.834792 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" event={"ID":"78c212db-b819-4b16-80ba-31aa1d95d3e2","Type":"ContainerStarted","Data":"392a1f342097da394d37cb27e5878b5f8c4b0b58106e4d9118d605179da6dd40"} Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.848663 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwbt\" (UniqueName: \"kubernetes.io/projected/39156cc1-cba3-4fce-b877-82ee6ac6ce02-kube-api-access-2cwbt\") pod \"marketplace-operator-79b997595-49gzj\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.854734 4750 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vwht9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.854757 4750 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ktrwb 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.854809 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" podUID="81c0849a-dd49-44b8-a94d-ad1138ab0246" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.854815 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.865000 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.874140 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.874911 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 13:54:32.374876493 +0000 UTC m=+144.400865974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.879016 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.879953 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.379935686 +0000 UTC m=+144.405925167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.882851 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.883795 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w24mx\" (UniqueName: \"kubernetes.io/projected/2d25e2ce-6a4c-45b3-a2ad-da6c93467163-kube-api-access-w24mx\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc67p\" (UID: \"2d25e2ce-6a4c-45b3-a2ad-da6c93467163\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.890325 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd4jf\" (UniqueName: \"kubernetes.io/projected/1b3c92ae-8baa-4f32-a45f-beb3bb4647aa-kube-api-access-pd4jf\") pod \"packageserver-d55dfcdfc-v5qh4\" (UID: \"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.907410 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.913377 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.918914 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.936001 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9cf7ae6-b0a2-4f05-94af-31b7e0a65547-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dcmq6\" (UID: \"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.936001 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.951387 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.956071 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc"] Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.957876 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4gx\" (UniqueName: \"kubernetes.io/projected/a31fba8a-dc4a-47b9-81b1-c5301ce38775-kube-api-access-8v4gx\") pod \"multus-admission-controller-857f4d67dd-pcdmn\" (UID: \"a31fba8a-dc4a-47b9-81b1-c5301ce38775\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.958556 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbb9g\" (UniqueName: \"kubernetes.io/projected/b1f936b3-23f5-4adb-8211-d0b58664e9e2-kube-api-access-zbb9g\") pod \"etcd-operator-b45778765-5ptgn\" (UID: \"b1f936b3-23f5-4adb-8211-d0b58664e9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" 
Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.979608 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4672\" (UniqueName: \"kubernetes.io/projected/b7024eec-9ab4-4573-a92e-634b43e88be3-kube-api-access-c4672\") pod \"service-ca-operator-777779d784-rw8px\" (UID: \"b7024eec-9ab4-4573-a92e-634b43e88be3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.979984 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:31 crc kubenswrapper[4750]: E0214 13:54:31.980324 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.480310661 +0000 UTC m=+144.506300142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:31 crc kubenswrapper[4750]: I0214 13:54:31.997023 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.000410 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.002805 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8l56\" (UniqueName: \"kubernetes.io/projected/e7e404e4-7975-4a48-8818-592130041f58-kube-api-access-s8l56\") pod \"service-ca-9c57cc56f-v7pb5\" (UID: \"e7e404e4-7975-4a48-8818-592130041f58\") " pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.006366 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbprg\" (UniqueName: \"kubernetes.io/projected/540c1ef7-ff20-4f5f-952d-2c62dc6fe72d-kube-api-access-nbprg\") pod \"package-server-manager-789f6589d5-8svsj\" (UID: \"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.016841 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.024884 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.046674 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqh64\" (UniqueName: \"kubernetes.io/projected/d2bc6a85-c626-4e48-a3a3-696026a7aa5b-kube-api-access-kqh64\") pod \"ingress-canary-9nxpw\" (UID: \"d2bc6a85-c626-4e48-a3a3-696026a7aa5b\") " pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.060091 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5j8k\" (UniqueName: \"kubernetes.io/projected/6e2ca7c1-d348-4e05-99cc-2611d8a7200b-kube-api-access-w5j8k\") pod \"olm-operator-6b444d44fb-bwqlb\" (UID: \"6e2ca7c1-d348-4e05-99cc-2611d8a7200b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.066925 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fmk\" (UniqueName: \"kubernetes.io/projected/78cf262a-619b-4edd-bffb-55e5d454b23b-kube-api-access-x6fmk\") pod \"collect-profiles-29517945-fwbrb\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.088733 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.089092 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.589076306 +0000 UTC m=+144.615065787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.101342 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2p4bf"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.145942 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dfv6f"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.147642 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.155600 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.170786 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.171716 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.193279 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.194206 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.69418721 +0000 UTC m=+144.720176691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.226568 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.252412 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.257277 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.269415 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.278022 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.285988 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.295861 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.296303 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.796286711 +0000 UTC m=+144.822276192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: W0214 13:54:32.328642 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf76c5124_cf8d_43a4_a700_d19319e6a329.slice/crio-9d399f77194f8ab730310a086ac6b9d93192caca9fb8e6e86f83633aa96a129b WatchSource:0}: Error finding container 9d399f77194f8ab730310a086ac6b9d93192caca9fb8e6e86f83633aa96a129b: Status 404 returned error can't find the container with id 9d399f77194f8ab730310a086ac6b9d93192caca9fb8e6e86f83633aa96a129b Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.333203 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9nxpw" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.364041 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qccxx"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.396856 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.397019 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.896995781 +0000 UTC m=+144.922985272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.397152 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.397368 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.897359727 +0000 UTC m=+144.923349208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.418328 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.498963 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.499633 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:32.999583744 +0000 UTC m=+145.025573235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: W0214 13:54:32.540265 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adb3cca_d5b3_4216_868b_73086725a3ed.slice/crio-815e89e2b6b2e7e0a00b768861f2f64551852a16f69f586f44d74396dcfae93c WatchSource:0}: Error finding container 815e89e2b6b2e7e0a00b768861f2f64551852a16f69f586f44d74396dcfae93c: Status 404 returned error can't find the container with id 815e89e2b6b2e7e0a00b768861f2f64551852a16f69f586f44d74396dcfae93c Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.602294 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.602841 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.102824415 +0000 UTC m=+145.128813896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.631023 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49gzj"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.689367 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5ptgn"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.691871 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" podStartSLOduration=122.69185663 podStartE2EDuration="2m2.69185663s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:32.687617053 +0000 UTC m=+144.713606524" watchObservedRunningTime="2026-02-14 13:54:32.69185663 +0000 UTC m=+144.717846111" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.703189 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.703419 4750 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.203396849 +0000 UTC m=+145.229386330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.703608 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.703930 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.203919632 +0000 UTC m=+145.229909113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.726553 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.780883 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lh5rn"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.804338 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.804799 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.304750057 +0000 UTC m=+145.330739538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.805255 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx"] Feb 14 13:54:32 crc kubenswrapper[4750]: W0214 13:54:32.809559 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39156cc1_cba3_4fce_b877_82ee6ac6ce02.slice/crio-b773639f573bb69fa90e24987be9887c863c912c29b42a518103ab98a1f3e1b3 WatchSource:0}: Error finding container b773639f573bb69fa90e24987be9887c863c912c29b42a518103ab98a1f3e1b3: Status 404 returned error can't find the container with id b773639f573bb69fa90e24987be9887c863c912c29b42a518103ab98a1f3e1b3 Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.847303 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" event={"ID":"0ea9b0e9-7732-4e69-98a7-208a8626682c","Type":"ContainerStarted","Data":"0f46d65bc4ea5c8d0552d38b0e3158768449662642c493dcbee59e8453f7fc73"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.848255 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.851779 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2gvbw" 
event={"ID":"571a6788-2357-4b0b-8791-f0efd6b03cb5","Type":"ContainerStarted","Data":"ea4728dfcc2975456cc1ecdfdc53b02ca056863efda85ba881d1a2901ea3bd23"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.852011 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2gvbw" event={"ID":"571a6788-2357-4b0b-8791-f0efd6b03cb5","Type":"ContainerStarted","Data":"9c3bc03fac45975cf80f586b65464d48aa1d61ea4b8bcaf98a24d8ff9d7f9b1f"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.854651 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qccxx" event={"ID":"6adb3cca-d5b3-4216-868b-73086725a3ed","Type":"ContainerStarted","Data":"815e89e2b6b2e7e0a00b768861f2f64551852a16f69f586f44d74396dcfae93c"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.857633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfv6f" event={"ID":"94fbc5e9-43f4-4efc-9964-f6dfe96a982b","Type":"ContainerStarted","Data":"86c96f6033f737c648286137e97a9355ec0d058bd53f2d468373f86e1a8ede16"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.859309 4750 patch_prober.go:28] interesting pod/console-operator-58897d9998-mfrd4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.859367 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" podUID="0ea9b0e9-7732-4e69-98a7-208a8626682c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.868587 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" event={"ID":"fa59d575-852e-4348-843a-c78681df1a9d","Type":"ContainerStarted","Data":"f002596fd494e66d3de136fca142de92b5b28e3d9bf57b9d41ed160bce937ea5"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.872279 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nt44b" event={"ID":"2e5a7961-9acb-485c-bca3-dcf013c839fa","Type":"ContainerStarted","Data":"38136940f876245ac009fb61c9164d67c3bef6d069793a5a2d2c1a0328b5179e"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.875007 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" event={"ID":"39156cc1-cba3-4fce-b877-82ee6ac6ce02","Type":"ContainerStarted","Data":"b773639f573bb69fa90e24987be9887c863c912c29b42a518103ab98a1f3e1b3"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.878837 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" event={"ID":"12522fe4-a583-49f0-baa1-2b6420bfd351","Type":"ContainerStarted","Data":"1991f90925df457fb001c4980139dcac30190085dfc00c1dceb25196c713c3dd"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.878881 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" event={"ID":"12522fe4-a583-49f0-baa1-2b6420bfd351","Type":"ContainerStarted","Data":"3742798a33f6a27c1266f5c3e72d112922f214241dce6561e2718d8b255cc2b7"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.892391 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" event={"ID":"4243e3e9-15c2-46a0-a695-5520b5921af0","Type":"ContainerStarted","Data":"2ee627c66cecf9b1daa8263151f8874bdeeada28af2ff278136f47b78b94a6b9"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 
13:54:32.893355 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" event={"ID":"b1f936b3-23f5-4adb-8211-d0b58664e9e2","Type":"ContainerStarted","Data":"3f6d3f2d71eedea4c246fe408c8148c722f7366c84eb820c8b17f40fc991b5e6"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.894138 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" event={"ID":"37420801-d7c5-4fe1-82e4-557f2c7f68e7","Type":"ContainerStarted","Data":"85e8d4e428be4f472e6ab9cb4abf74a8c14f1169e5d2b8ed895352d902b610e4"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.894993 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" event={"ID":"69dbf559-3da4-4374-9098-028d9390f555","Type":"ContainerStarted","Data":"a534b64208390ef226232228b045fecd376dc2df36647d8b7495463eb53c8c0c"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.895809 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" event={"ID":"f76c5124-cf8d-43a4-a700-d19319e6a329","Type":"ContainerStarted","Data":"9d399f77194f8ab730310a086ac6b9d93192caca9fb8e6e86f83633aa96a129b"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.896835 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" event={"ID":"78c212db-b819-4b16-80ba-31aa1d95d3e2","Type":"ContainerStarted","Data":"073db15b10ead70f8c4b6e8d0389267d2540df4a607d64ac62ece35ce0794cef"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.905685 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" 
(UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:32 crc kubenswrapper[4750]: E0214 13:54:32.908777 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.408761472 +0000 UTC m=+145.434750953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.910463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" event={"ID":"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6","Type":"ContainerStarted","Data":"9d0539598a001c63699dff622a687f7cbb20a5f152389c4e3b1f243702a19c3e"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.911134 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.914886 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" event={"ID":"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d","Type":"ContainerStarted","Data":"48099079d34e7dd6a71798a01cfd3efd72b512cc0d3f9dc0750e88b8c5726258"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.914937 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" event={"ID":"f115a9d1-2c5b-4d6c-aa8b-e63225e33b2d","Type":"ContainerStarted","Data":"d6a67495ad69ef1493d2f31c6601f56797a5476ae0ac3ab7abe63849ffeff3b9"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.919271 4750 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f84rf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.919320 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" podUID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.928819 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" event={"ID":"3ee53a13-e1e0-4062-960c-554ac2d87327","Type":"ContainerStarted","Data":"b3ff8386ffe2cac974b753ed501715af09eb9bc8e642300502fc97a0510f1732"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.940798 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" event={"ID":"99f80072-8a72-46de-b04e-2c6015ee4154","Type":"ContainerStarted","Data":"db8e39aebe1588f413f9d98ed6ccf3a9b280baf9b099891d8a010d30f8cb49e9"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.945045 4750 generic.go:334] "Generic (PLEG): container finished" podID="6fe262ca-0856-4067-9ce3-13eeaf1f8768" containerID="87f70814800ca0f49ca3180fe7e175939ba47c4ee40032f48e51adc83e9108f8" exitCode=0 Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 
13:54:32.945121 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" event={"ID":"6fe262ca-0856-4067-9ce3-13eeaf1f8768","Type":"ContainerDied","Data":"87f70814800ca0f49ca3180fe7e175939ba47c4ee40032f48e51adc83e9108f8"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.946818 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" event={"ID":"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f","Type":"ContainerStarted","Data":"8672f9ce4065a5722a37a6e8abfaf53ef8ae58ec20a5689fb121045f9f673f05"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.952152 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" event={"ID":"b4643208-f942-4075-aa65-178bc272c07d","Type":"ContainerStarted","Data":"518405dd5d240cdb558783beb3297ae3933a394602b7d2f042df8a67a46f305a"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.952203 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" event={"ID":"b4643208-f942-4075-aa65-178bc272c07d","Type":"ContainerStarted","Data":"9c7f07f0e9e758915b111f12c47e648ab61ba2403d30df64f963ca70acbc26ac"} Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.957905 4750 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ktrwb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.957973 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.961214 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" Feb 14 13:54:32 crc kubenswrapper[4750]: W0214 13:54:32.961901 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1b8a23_779e_49fe_8e23_d9c4a53117b0.slice/crio-454d32c683541944a40f0f7fa6a72a50898c45abd76e80543ac2e6a1ea639e57 WatchSource:0}: Error finding container 454d32c683541944a40f0f7fa6a72a50898c45abd76e80543ac2e6a1ea639e57: Status 404 returned error can't find the container with id 454d32c683541944a40f0f7fa6a72a50898c45abd76e80543ac2e6a1ea639e57 Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.985277 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v7pb5"] Feb 14 13:54:32 crc kubenswrapper[4750]: I0214 13:54:32.987419 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.007810 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.009365 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 13:54:33.509346027 +0000 UTC m=+145.535335508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.027807 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qhxgj"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.035025 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.110321 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.112615 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.612593998 +0000 UTC m=+145.638583479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.131971 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.154414 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.211229 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.212258 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.712234861 +0000 UTC m=+145.738224352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.264535 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-66qtn" podStartSLOduration=124.264518866 podStartE2EDuration="2m4.264518866s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.26256961 +0000 UTC m=+145.288559091" watchObservedRunningTime="2026-02-14 13:54:33.264518866 +0000 UTC m=+145.290508347" Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.319055 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.819037039 +0000 UTC m=+145.845026520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.317908 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.347613 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzvww" podStartSLOduration=123.347594879 podStartE2EDuration="2m3.347594879s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.315660401 +0000 UTC m=+145.341649882" watchObservedRunningTime="2026-02-14 13:54:33.347594879 +0000 UTC m=+145.373584360" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.434559 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.435336 4750 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:33.935318006 +0000 UTC m=+145.961307487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.437479 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" podStartSLOduration=123.437466091 podStartE2EDuration="2m3.437466091s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.437467971 +0000 UTC m=+145.463457462" watchObservedRunningTime="2026-02-14 13:54:33.437466091 +0000 UTC m=+145.463455572" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.473311 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.487782 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.538935 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.539243 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.039231457 +0000 UTC m=+146.065220938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: W0214 13:54:33.559343 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e2ca7c1_d348_4e05_99cc_2611d8a7200b.slice/crio-08c2d3fac77e02079c671fd4c67be4ee2011e4797894fbbfffb5135a6da0aa87 WatchSource:0}: Error finding container 08c2d3fac77e02079c671fd4c67be4ee2011e4797894fbbfffb5135a6da0aa87: Status 404 returned error can't find the container with id 08c2d3fac77e02079c671fd4c67be4ee2011e4797894fbbfffb5135a6da0aa87 Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.570701 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9nxpw"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.640825 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.641619 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.14159776 +0000 UTC m=+146.167587241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.642871 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.675376 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pcdmn"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.683388 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8px"] Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.743856 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: 
\"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.744310 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.244294987 +0000 UTC m=+146.270284468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.758464 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" podStartSLOduration=124.758447561 podStartE2EDuration="2m4.758447561s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.697461143 +0000 UTC m=+145.723450624" watchObservedRunningTime="2026-02-14 13:54:33.758447561 +0000 UTC m=+145.784437042" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.759156 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24fsk" podStartSLOduration=124.759149232 podStartE2EDuration="2m4.759149232s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
13:54:33.756892903 +0000 UTC m=+145.782882384" watchObservedRunningTime="2026-02-14 13:54:33.759149232 +0000 UTC m=+145.785138713" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.800389 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk4fk" podStartSLOduration=124.800372359 podStartE2EDuration="2m4.800372359s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.797451451 +0000 UTC m=+145.823440922" watchObservedRunningTime="2026-02-14 13:54:33.800372359 +0000 UTC m=+145.826361840" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.846382 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.846937 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.346921082 +0000 UTC m=+146.372910563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.908924 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2gvbw" podStartSLOduration=4.908907464 podStartE2EDuration="4.908907464s" podCreationTimestamp="2026-02-14 13:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.881278196 +0000 UTC m=+145.907267677" watchObservedRunningTime="2026-02-14 13:54:33.908907464 +0000 UTC m=+145.934896945" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.953353 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:33 crc kubenswrapper[4750]: E0214 13:54:33.953651 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.453640336 +0000 UTC m=+146.479629817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.975097 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" podStartSLOduration=123.975080402 podStartE2EDuration="2m3.975080402s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.974961416 +0000 UTC m=+146.000950897" watchObservedRunningTime="2026-02-14 13:54:33.975080402 +0000 UTC m=+146.001069883" Feb 14 13:54:33 crc kubenswrapper[4750]: I0214 13:54:33.975617 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lb86" podStartSLOduration=123.975612385 podStartE2EDuration="2m3.975612385s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:33.915472254 +0000 UTC m=+145.941461735" watchObservedRunningTime="2026-02-14 13:54:33.975612385 +0000 UTC m=+146.001601866" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.057556 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.057743 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.557712884 +0000 UTC m=+146.583702375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.057939 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.058300 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.55828616 +0000 UTC m=+146.584275641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.083990 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" event={"ID":"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d","Type":"ContainerStarted","Data":"0cf6eb47ce9636413bf7b7903b233f514f86fa0164b613816793a7bbc7b841fb"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.110801 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" event={"ID":"456beeb0-9309-48a6-9985-a0f57a40c10b","Type":"ContainerStarted","Data":"0912a718c61e615f11bf402916c60dcc84ba5d3048c3071b279db672839d5cb3"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.116521 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" event={"ID":"f2ac50c2-e66e-4771-925f-01766b0b2fd8","Type":"ContainerStarted","Data":"e082eb7523680023649fcede3e92fa81093dcbf266b1e82305d0d058989a761c"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.150041 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" event={"ID":"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547","Type":"ContainerStarted","Data":"45cc2cb5dc3a63f82c4067752ed9109f903307a2cbea699ce4187472e768d0d2"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.159371 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.159852 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.659832656 +0000 UTC m=+146.685822137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.159996 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" event={"ID":"eb609f2a-ec21-4191-be9d-99c56059f587","Type":"ContainerStarted","Data":"8d7511fffa445173be7343ebdcd4d4cbc04a05bd5333febb008368923104f824"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.162151 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" event={"ID":"7295b75e-297a-4aec-bcd5-b2a048229985","Type":"ContainerStarted","Data":"492eea3ebdaf26eb182a4170ae6b0d79378fdb1f989fb40c5ee1f1657e3e6923"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.171055 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" 
event={"ID":"3ee53a13-e1e0-4062-960c-554ac2d87327","Type":"ContainerStarted","Data":"32c71fd9d56736fcf4a170d7c96606b8026a61750337a7f6630e462c5fc4a6b3"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.172052 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.177676 4750 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9qkbr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.177716 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" podUID="3ee53a13-e1e0-4062-960c-554ac2d87327" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.182279 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" event={"ID":"e7e404e4-7975-4a48-8818-592130041f58","Type":"ContainerStarted","Data":"dfc7e594b62af5279396fc5b7552d1f3119c10b4919162f95006ebdfd0543f6f"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.233868 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" event={"ID":"69dbf559-3da4-4374-9098-028d9390f555","Type":"ContainerStarted","Data":"aba418ee880c258f7ea2fed735d53c8a98c08a72d95f7af038a38ee8cfadc9da"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.237350 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qccxx" 
event={"ID":"6adb3cca-d5b3-4216-868b-73086725a3ed","Type":"ContainerStarted","Data":"11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.248883 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" event={"ID":"4243e3e9-15c2-46a0-a695-5520b5921af0","Type":"ContainerStarted","Data":"516d5469f832527d06a84e0a71f364c71e360b6728975736d75f7adb2dc389ab"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.262548 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.263275 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.763245105 +0000 UTC m=+146.789234616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.301545 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" event={"ID":"fa59d575-852e-4348-843a-c78681df1a9d","Type":"ContainerStarted","Data":"0a2619c02c34669340521c90574d67678a6ca4f640c517146c5953ce2085e316"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.312849 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" podStartSLOduration=124.312831401 podStartE2EDuration="2m4.312831401s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.301863948 +0000 UTC m=+146.327853429" watchObservedRunningTime="2026-02-14 13:54:34.312831401 +0000 UTC m=+146.338820882" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.321634 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" event={"ID":"a31fba8a-dc4a-47b9-81b1-c5301ce38775","Type":"ContainerStarted","Data":"2637c9e0e849e20719007d0a69356e1da1a70370c9ef22de15526538d8434a08"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.363498 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.365035 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:34.865012582 +0000 UTC m=+146.891002063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.410788 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" event={"ID":"99f80072-8a72-46de-b04e-2c6015ee4154","Type":"ContainerStarted","Data":"5de1c01b3480bb4d674b7b5a5507c2f550e2181cb201b492ff96093f4f611a84"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.424052 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x9q7n" podStartSLOduration=125.424037694 podStartE2EDuration="2m5.424037694s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.398340031 +0000 UTC m=+146.424329512" watchObservedRunningTime="2026-02-14 13:54:34.424037694 +0000 UTC m=+146.450027175" Feb 14 
13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.425460 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qccxx" podStartSLOduration=124.425454887 podStartE2EDuration="2m4.425454887s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.423666018 +0000 UTC m=+146.449655499" watchObservedRunningTime="2026-02-14 13:54:34.425454887 +0000 UTC m=+146.451444368" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.450714 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" event={"ID":"2d25e2ce-6a4c-45b3-a2ad-da6c93467163","Type":"ContainerStarted","Data":"3983467f272476d49057f18d037a100eb728553b32d5b249326a057d6625a7be"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.453417 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" event={"ID":"6e2ca7c1-d348-4e05-99cc-2611d8a7200b","Type":"ContainerStarted","Data":"08c2d3fac77e02079c671fd4c67be4ee2011e4797894fbbfffb5135a6da0aa87"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.465394 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.466316 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 13:54:34.966305047 +0000 UTC m=+146.992294528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.500211 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" event={"ID":"9d1b8a23-779e-49fe-8e23-d9c4a53117b0","Type":"ContainerStarted","Data":"bff8077326e0dc5ef214b3046f750e38b6427e401cb7e89feed6fdaff884a51a"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.500265 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" event={"ID":"9d1b8a23-779e-49fe-8e23-d9c4a53117b0","Type":"ContainerStarted","Data":"454d32c683541944a40f0f7fa6a72a50898c45abd76e80543ac2e6a1ea639e57"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.509905 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" event={"ID":"78cf262a-619b-4edd-bffb-55e5d454b23b","Type":"ContainerStarted","Data":"98400f4207455f80cedcf6cfaa6c1a24080389dfd62c7ad69fe2e4534394d203"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.528278 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qhxgj" event={"ID":"6dd6d380-1734-4106-9717-f5207e5cec61","Type":"ContainerStarted","Data":"79a05b6e78cfd18d4877de3ac16b67cd3f165291b9b8a374e605fae69bb23cb7"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.537181 4750 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rv4bx" podStartSLOduration=124.537145371 podStartE2EDuration="2m4.537145371s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.535073549 +0000 UTC m=+146.561063030" watchObservedRunningTime="2026-02-14 13:54:34.537145371 +0000 UTC m=+146.563134852" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.538623 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpsj4" podStartSLOduration=124.538617025 podStartE2EDuration="2m4.538617025s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.459138392 +0000 UTC m=+146.485127873" watchObservedRunningTime="2026-02-14 13:54:34.538617025 +0000 UTC m=+146.564606506" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.540620 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9nxpw" event={"ID":"d2bc6a85-c626-4e48-a3a3-696026a7aa5b","Type":"ContainerStarted","Data":"d7ec2a58919f43a85915b6e9d3b5aba6a16b00898cef3f01fdedc614687259cd"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.543768 4750 generic.go:334] "Generic (PLEG): container finished" podID="f76c5124-cf8d-43a4-a700-d19319e6a329" containerID="07c257b33e147d0747cbdcc299980f64e417364f069bea19895b54ceff2a75d3" exitCode=0 Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.543834 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" 
event={"ID":"f76c5124-cf8d-43a4-a700-d19319e6a329","Type":"ContainerDied","Data":"07c257b33e147d0747cbdcc299980f64e417364f069bea19895b54ceff2a75d3"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.566692 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.567589 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.067561411 +0000 UTC m=+147.093550892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.573638 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" event={"ID":"37420801-d7c5-4fe1-82e4-557f2c7f68e7","Type":"ContainerStarted","Data":"e557ff3f3acada551635037ec66ced5e7d546cbae1cae3839646c8e875ec5ede"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.597361 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" 
event={"ID":"b7024eec-9ab4-4573-a92e-634b43e88be3","Type":"ContainerStarted","Data":"df350c232eab3b086c347f182fcf2e999738e863611cd827895b25a6aed7c727"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.603835 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfv6f" event={"ID":"94fbc5e9-43f4-4efc-9964-f6dfe96a982b","Type":"ContainerStarted","Data":"ad64ba8d2363f9c535cb66305f924335a5210d3009ef230bc2e7f43d26edcf50"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.604713 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.610283 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" event={"ID":"bd3b81a8-83ce-488e-b41d-eed56bc4bc6f","Type":"ContainerStarted","Data":"97bded503c8c9b27eab8bb18e804587b9a041d3a7c7bdfc859baff3f60f916a3"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.624283 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" event={"ID":"39156cc1-cba3-4fce-b877-82ee6ac6ce02","Type":"ContainerStarted","Data":"041a8f7701cc78b4fc3d54bcac8d40d07b76a1c9c4b1c4c97d405e881eb9c818"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.626494 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.634219 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.634290 4750 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.642937 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dfv6f" podStartSLOduration=124.642913372 podStartE2EDuration="2m4.642913372s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.639525973 +0000 UTC m=+146.665515454" watchObservedRunningTime="2026-02-14 13:54:34.642913372 +0000 UTC m=+146.668902853" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.646685 4750 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-49gzj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.646971 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.670509 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.672672 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.172656234 +0000 UTC m=+147.198645805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.687805 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" event={"ID":"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa","Type":"ContainerStarted","Data":"7547158a08b2a5700b01fb2ddd279b2e11c8af01a033d081d2dff14d21432bbd"} Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.687867 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.698067 4750 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v5qh4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.698144 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" 
podUID="1b3c92ae-8baa-4f32-a45f-beb3bb4647aa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.771568 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.772204 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.272175731 +0000 UTC m=+147.298165212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.772278 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.772789 4750 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.272775917 +0000 UTC m=+147.298765398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.805468 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-72jbc" podStartSLOduration=124.805442348 podStartE2EDuration="2m4.805442348s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.694869143 +0000 UTC m=+146.720858644" watchObservedRunningTime="2026-02-14 13:54:34.805442348 +0000 UTC m=+146.831431839" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.805871 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dz45f" podStartSLOduration=124.805864156 podStartE2EDuration="2m4.805864156s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.77827108 +0000 UTC m=+146.804260581" watchObservedRunningTime="2026-02-14 13:54:34.805864156 +0000 UTC m=+146.831853647" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 
13:54:34.832860 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.846933 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" podStartSLOduration=124.846910276 podStartE2EDuration="2m4.846910276s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.81162053 +0000 UTC m=+146.837610011" watchObservedRunningTime="2026-02-14 13:54:34.846910276 +0000 UTC m=+146.872899757" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.863264 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" podStartSLOduration=124.863246696 podStartE2EDuration="2m4.863246696s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:34.845703882 +0000 UTC m=+146.871693373" watchObservedRunningTime="2026-02-14 13:54:34.863246696 +0000 UTC m=+146.889236177" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.868184 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mfrd4" Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.873454 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 
13:54:34.873780 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.373749819 +0000 UTC m=+147.399739290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:34 crc kubenswrapper[4750]: I0214 13:54:34.976222 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:34 crc kubenswrapper[4750]: E0214 13:54:34.976651 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.476635215 +0000 UTC m=+147.502624696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.023865 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.077782 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.079211 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.579161875 +0000 UTC m=+147.605151356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.190665 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.191153 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.691134011 +0000 UTC m=+147.717123492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.292660 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.292933 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.792900457 +0000 UTC m=+147.818889938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.293366 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.293828 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.793803667 +0000 UTC m=+147.819793148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.398415 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.398869 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:35.898848688 +0000 UTC m=+147.924838169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.499595 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.500101 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.000077131 +0000 UTC m=+148.026066612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.601746 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.601901 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.101876269 +0000 UTC m=+148.127865750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.602357 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.602691 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.102682574 +0000 UTC m=+148.128672055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.703313 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.703719 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.203701088 +0000 UTC m=+148.229690569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.704997 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" event={"ID":"2d25e2ce-6a4c-45b3-a2ad-da6c93467163","Type":"ContainerStarted","Data":"b103e226412ab2a104af5b91f98455746639d53655afcd4b346f6ef5888331b0"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.708187 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" event={"ID":"78cf262a-619b-4edd-bffb-55e5d454b23b","Type":"ContainerStarted","Data":"7c4578f066aec2dd97ad7a4113f70b265c30d2f0b662e953b837dd48e5ee1535"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.718519 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" event={"ID":"f9cf7ae6-b0a2-4f05-94af-31b7e0a65547","Type":"ContainerStarted","Data":"a3a2eb2e051e3233b4923f00dae135ed055a8410da25a572cea346cc1f3e7eac"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.727082 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" event={"ID":"e7e404e4-7975-4a48-8818-592130041f58","Type":"ContainerStarted","Data":"036ed42b18ee14a887ba07dae25ceba0f2e3f751131f2100ffde02942f29bcc9"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.769607 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" event={"ID":"456beeb0-9309-48a6-9985-a0f57a40c10b","Type":"ContainerStarted","Data":"cf7fc2cd3bcfc6001db2b31b09a69903b7f1829f0ad63a5536578ab29feb96e7"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.789745 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" event={"ID":"b1f936b3-23f5-4adb-8211-d0b58664e9e2","Type":"ContainerStarted","Data":"42dbb40f19482689b4e4642bed327b70203e8145609d7651c997dadfda69bf8f"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.803682 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" event={"ID":"6e2ca7c1-d348-4e05-99cc-2611d8a7200b","Type":"ContainerStarted","Data":"79e4f0b9c7270120640ecdf2dee22c68a9a7e553576a60a06e9e71bcc9b12224"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.804545 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.805841 4750 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bwqlb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.805872 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" podUID="6e2ca7c1-d348-4e05-99cc-2611d8a7200b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.806881 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.808168 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.308153343 +0000 UTC m=+148.334142904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.812588 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc67p" podStartSLOduration=125.812572517 podStartE2EDuration="2m5.812572517s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:35.762139554 +0000 UTC m=+147.788129025" watchObservedRunningTime="2026-02-14 13:54:35.812572517 +0000 UTC m=+147.838561998" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.813010 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-v7pb5" podStartSLOduration=125.813004286 
podStartE2EDuration="2m5.813004286s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:35.81173338 +0000 UTC m=+147.837722851" watchObservedRunningTime="2026-02-14 13:54:35.813004286 +0000 UTC m=+147.838993767" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.855661 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" event={"ID":"f2ac50c2-e66e-4771-925f-01766b0b2fd8","Type":"ContainerStarted","Data":"d9c802d2519d1d7ff604b620fc2301fc73e6c4bed033d33ab4a072f092dd2579"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.855703 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" event={"ID":"f2ac50c2-e66e-4771-925f-01766b0b2fd8","Type":"ContainerStarted","Data":"2562149bcd5d8d6e8554d455abcac2c7b4f11034e65aa0adb3639d1bf6c6613c"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.862297 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" podStartSLOduration=125.862280849 podStartE2EDuration="2m5.862280849s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:35.860554833 +0000 UTC m=+147.886544314" watchObservedRunningTime="2026-02-14 13:54:35.862280849 +0000 UTC m=+147.888270330" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.889393 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nt44b" event={"ID":"2e5a7961-9acb-485c-bca3-dcf013c839fa","Type":"ContainerStarted","Data":"e55f0f27c14730ceccbaadec5ce0185adb459ae8bf3db03908bb768afd979d3c"} Feb 14 
13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.892699 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qhxgj" event={"ID":"6dd6d380-1734-4106-9717-f5207e5cec61","Type":"ContainerStarted","Data":"323ba6eecfec9c55ee11810563d26103fdb2062b48fe821e906aff29a8b74450"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.893089 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.907409 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:35 crc kubenswrapper[4750]: E0214 13:54:35.908556 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.408538158 +0000 UTC m=+148.434527639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.914815 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dcmq6" podStartSLOduration=125.914800484 podStartE2EDuration="2m5.914800484s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:35.913222495 +0000 UTC m=+147.939211976" watchObservedRunningTime="2026-02-14 13:54:35.914800484 +0000 UTC m=+147.940789965" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.919107 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" event={"ID":"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d","Type":"ContainerStarted","Data":"0d40b40bbcfebd89ba808913fe918e651dde839157ab4971ddd22e59d90421da"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.919158 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.919168 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" event={"ID":"540c1ef7-ff20-4f5f-952d-2c62dc6fe72d","Type":"ContainerStarted","Data":"38a983d8396e6b86fa62a2fdb0856c4f65d4fccf40c74ad48fecdaab0b3ebf29"} Feb 14 
13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.930486 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" event={"ID":"b7024eec-9ab4-4573-a92e-634b43e88be3","Type":"ContainerStarted","Data":"efe66fb9e92ded9ee0c9555fd374fb55626b974c174632cb56b509af2e1d6d36"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.938931 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.939103 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.939148 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.939359 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.939404 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.943781 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" event={"ID":"a31fba8a-dc4a-47b9-81b1-c5301ce38775","Type":"ContainerStarted","Data":"4c1677818444375e25069cd37e56d053f3b3897bed90a471ecd01e1ec37e59cb"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 
13:54:35.945564 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" event={"ID":"69dbf559-3da4-4374-9098-028d9390f555","Type":"ContainerStarted","Data":"a7477e4f99b635bc136dd0461a878c55f9e9c7f21112cc6875585ae840d5be0f"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.948639 4750 patch_prober.go:28] interesting pod/apiserver-76f77b778f-sbvl5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.948701 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" podUID="456beeb0-9309-48a6-9985-a0f57a40c10b" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.958356 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" event={"ID":"f76c5124-cf8d-43a4-a700-d19319e6a329","Type":"ContainerStarted","Data":"d1ea155e90577e7b1735add8b47eaeea1807acf18ba9d095ad842cf933e7f1ac"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.958885 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.964433 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" event={"ID":"1b3c92ae-8baa-4f32-a45f-beb3bb4647aa","Type":"ContainerStarted","Data":"10f67d255b2955fc7e743f35d4713e16e8e4fb63cba10198fb5ac082d1bee820"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.965239 4750 patch_prober.go:28] interesting 
pod/packageserver-d55dfcdfc-v5qh4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.965272 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" podUID="1b3c92ae-8baa-4f32-a45f-beb3bb4647aa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.966456 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" podStartSLOduration=126.966440471 podStartE2EDuration="2m6.966440471s" podCreationTimestamp="2026-02-14 13:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:35.965400995 +0000 UTC m=+147.991390476" watchObservedRunningTime="2026-02-14 13:54:35.966440471 +0000 UTC m=+147.992429952" Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.967318 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" event={"ID":"fa59d575-852e-4348-843a-c78681df1a9d","Type":"ContainerStarted","Data":"b26511f849723b06c68fa6cef02e5a00da17b8a466de00824c38581e2f305043"} Feb 14 13:54:35 crc kubenswrapper[4750]: I0214 13:54:35.986131 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" event={"ID":"6fe262ca-0856-4067-9ce3-13eeaf1f8768","Type":"ContainerStarted","Data":"3c3bc56c8ccfe46920d45292cf583ecc80908137f24dc9f5426d17829c1ea129"} Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.004085 4750 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" event={"ID":"b4643208-f942-4075-aa65-178bc272c07d","Type":"ContainerStarted","Data":"ae2eef94f4b6732592e049ed9ca639fa642d1087bfa7edae1d5aec6f3145d277"} Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.009567 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.009829 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.509815933 +0000 UTC m=+148.535805414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.022768 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" event={"ID":"eb609f2a-ec21-4191-be9d-99c56059f587","Type":"ContainerStarted","Data":"246f72769ddb5c518410dc089ed6a8795814f9fe910e5a0a6ee4d78fac12524f"} Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.022816 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" event={"ID":"eb609f2a-ec21-4191-be9d-99c56059f587","Type":"ContainerStarted","Data":"36cc4fa59eff70b57af6d72cfa111c436e26c6a2bd07f05dc1b1ea116921980d"} Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.038633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9nxpw" event={"ID":"d2bc6a85-c626-4e48-a3a3-696026a7aa5b","Type":"ContainerStarted","Data":"a2ec85ed0fa6c6088f494aa234c21f5858449134a6b3e926b2449f7ffcb82092"} Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.039884 4750 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-49gzj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.039918 4750 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.050704 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qhxgj" podStartSLOduration=8.050681535 podStartE2EDuration="8.050681535s" podCreationTimestamp="2026-02-14 13:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.004031958 +0000 UTC m=+148.030021439" watchObservedRunningTime="2026-02-14 13:54:36.050681535 +0000 UTC m=+148.076671016" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.051778 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" podStartSLOduration=126.051762632 podStartE2EDuration="2m6.051762632s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.039529973 +0000 UTC m=+148.065519444" watchObservedRunningTime="2026-02-14 13:54:36.051762632 +0000 UTC m=+148.077752123" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.057683 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.057745 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.084619 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlrzt" podStartSLOduration=126.08459274 podStartE2EDuration="2m6.08459274s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.083791744 +0000 UTC m=+148.109781225" watchObservedRunningTime="2026-02-14 13:54:36.08459274 +0000 UTC m=+148.110582221" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.109892 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.112790 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.612767572 +0000 UTC m=+148.638757053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.124438 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9qkbr" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.134957 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nt44b" podStartSLOduration=126.134929069 podStartE2EDuration="2m6.134929069s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.134234668 +0000 UTC m=+148.160224149" watchObservedRunningTime="2026-02-14 13:54:36.134929069 +0000 UTC m=+148.160918550" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.215400 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.215936 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 13:54:36.715915819 +0000 UTC m=+148.741905290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.249127 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5ptgn" podStartSLOduration=126.249094282 podStartE2EDuration="2m6.249094282s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.18326577 +0000 UTC m=+148.209255241" watchObservedRunningTime="2026-02-14 13:54:36.249094282 +0000 UTC m=+148.275083763" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.251458 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rb9d9" podStartSLOduration=126.251450636 podStartE2EDuration="2m6.251450636s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.248730966 +0000 UTC m=+148.274720447" watchObservedRunningTime="2026-02-14 13:54:36.251450636 +0000 UTC m=+148.277440117" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.282774 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" 
podStartSLOduration=126.282755926 podStartE2EDuration="2m6.282755926s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.282076886 +0000 UTC m=+148.308066367" watchObservedRunningTime="2026-02-14 13:54:36.282755926 +0000 UTC m=+148.308745397" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.316602 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.316992 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.816961224 +0000 UTC m=+148.842950705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.317327 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.317691 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.817674415 +0000 UTC m=+148.843663896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.323017 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" podStartSLOduration=126.3229951 podStartE2EDuration="2m6.3229951s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.322680576 +0000 UTC m=+148.348670057" watchObservedRunningTime="2026-02-14 13:54:36.3229951 +0000 UTC m=+148.348984581" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.357040 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n96f" podStartSLOduration=126.35701638 podStartE2EDuration="2m6.35701638s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.354276459 +0000 UTC m=+148.380265940" watchObservedRunningTime="2026-02-14 13:54:36.35701638 +0000 UTC m=+148.383005861" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.397972 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9nxpw" podStartSLOduration=8.397952704 podStartE2EDuration="8.397952704s" podCreationTimestamp="2026-02-14 13:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.396184016 +0000 UTC m=+148.422173497" watchObservedRunningTime="2026-02-14 13:54:36.397952704 +0000 UTC m=+148.423942185" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.419024 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.419404 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.919377219 +0000 UTC m=+148.945366700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.419571 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.419901 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:36.919886871 +0000 UTC m=+148.945876352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.425622 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" podStartSLOduration=126.425601643 podStartE2EDuration="2m6.425601643s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.423735411 +0000 UTC m=+148.449724892" watchObservedRunningTime="2026-02-14 13:54:36.425601643 +0000 UTC m=+148.451591124" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.520945 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.521668 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.021628977 +0000 UTC m=+149.047618458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.522451 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8px" podStartSLOduration=126.522437722 podStartE2EDuration="2m6.522437722s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.499049941 +0000 UTC m=+148.525039412" watchObservedRunningTime="2026-02-14 13:54:36.522437722 +0000 UTC m=+148.548427203" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.572320 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2p4bf" podStartSLOduration=126.57229789 podStartE2EDuration="2m6.57229789s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.571551967 +0000 UTC m=+148.597541438" watchObservedRunningTime="2026-02-14 13:54:36.57229789 +0000 UTC m=+148.598287361" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.573525 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xm5fz" podStartSLOduration=126.573518854 podStartE2EDuration="2m6.573518854s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:36.524265103 +0000 UTC m=+148.550254584" watchObservedRunningTime="2026-02-14 13:54:36.573518854 +0000 UTC m=+148.599508335" Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.622640 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.623046 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.123026637 +0000 UTC m=+149.149016118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.724210 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.724437 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.224390345 +0000 UTC m=+149.250379826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.724854 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.725368 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.225358728 +0000 UTC m=+149.251348209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.768003 4750 csr.go:261] certificate signing request csr-68g96 is approved, waiting to be issued Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.781469 4750 csr.go:257] certificate signing request csr-68g96 is issued Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.825921 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.827384 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.327351975 +0000 UTC m=+149.353341466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.927381 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:36 crc kubenswrapper[4750]: E0214 13:54:36.927767 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.427749901 +0000 UTC m=+149.453739382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.939921 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:36 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:36 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:36 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:36 crc kubenswrapper[4750]: I0214 13:54:36.940050 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.028337 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.028721 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 13:54:37.528704591 +0000 UTC m=+149.554694072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.045141 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" event={"ID":"a31fba8a-dc4a-47b9-81b1-c5301ce38775","Type":"ContainerStarted","Data":"73bc12083f1305e819da59429158f03d415ba666be68891657dae1f508e9dfe8"} Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.047834 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qhxgj" event={"ID":"6dd6d380-1734-4106-9717-f5207e5cec61","Type":"ContainerStarted","Data":"e21d513e5b1d8248c5cc01ca5976fb43aba84d621a690f07daf7a3f95bc839e5"} Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.050052 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" event={"ID":"7295b75e-297a-4aec-bcd5-b2a048229985","Type":"ContainerStarted","Data":"81462ed29e5546080aaa24d48ffcdf6777ba0729f2e0faba20c55af0672b9a74"} Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.052087 4750 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bwqlb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.052147 4750 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" podUID="6e2ca7c1-d348-4e05-99cc-2611d8a7200b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.054021 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.054053 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.131191 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.135575 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.635557272 +0000 UTC m=+149.661546743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.146324 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pcdmn" podStartSLOduration=127.146301606 podStartE2EDuration="2m7.146301606s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:37.091623075 +0000 UTC m=+149.117612556" watchObservedRunningTime="2026-02-14 13:54:37.146301606 +0000 UTC m=+149.172291087" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.146589 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w8hcx"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.147745 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.154562 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.184075 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8hcx"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.232583 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.232771 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.732739116 +0000 UTC m=+149.758728607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.233004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.233086 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hbs\" (UniqueName: \"kubernetes.io/projected/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-kube-api-access-w5hbs\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.233228 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-utilities\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.233288 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-catalog-content\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.233645 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.733633466 +0000 UTC m=+149.759622957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.322061 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdd5l"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.323285 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.325540 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.334078 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.334301 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-catalog-content\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.334503 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5hbs\" (UniqueName: \"kubernetes.io/projected/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-kube-api-access-w5hbs\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.334915 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-utilities\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.335062 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-utilities\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.335155 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-catalog-content\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.335265 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57t5v\" (UniqueName: \"kubernetes.io/projected/f2d2bcf7-e24d-4879-85df-777e045107ad-kube-api-access-57t5v\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.335534 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-utilities\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.335651 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-catalog-content\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.335717 4750 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.835667484 +0000 UTC m=+149.861657165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.337788 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdd5l"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.399805 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5hbs\" (UniqueName: \"kubernetes.io/projected/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-kube-api-access-w5hbs\") pod \"certified-operators-w8hcx\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.436913 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-catalog-content\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.436963 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.437004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-utilities\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.437053 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57t5v\" (UniqueName: \"kubernetes.io/projected/f2d2bcf7-e24d-4879-85df-777e045107ad-kube-api-access-57t5v\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.437693 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-catalog-content\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.437952 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:37.937941693 +0000 UTC m=+149.963931174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.438312 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-utilities\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.460336 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57t5v\" (UniqueName: \"kubernetes.io/projected/f2d2bcf7-e24d-4879-85df-777e045107ad-kube-api-access-57t5v\") pod \"community-operators-sdd5l\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.470441 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.518809 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96865"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.520399 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.538698 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.538987 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfgz\" (UniqueName: \"kubernetes.io/projected/5c5e32c6-03aa-44ee-9355-346b7b6c516c-kube-api-access-2gfgz\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.539041 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-utilities\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.539074 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-catalog-content\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.539265 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-14 13:54:38.039238248 +0000 UTC m=+150.065227729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.544383 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96865"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.638952 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.640875 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.640950 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfgz\" (UniqueName: \"kubernetes.io/projected/5c5e32c6-03aa-44ee-9355-346b7b6c516c-kube-api-access-2gfgz\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.640987 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-utilities\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.641018 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-catalog-content\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.641728 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-catalog-content\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.642209 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.142195397 +0000 UTC m=+150.168184878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.645909 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-utilities\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.674131 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gfgz\" (UniqueName: \"kubernetes.io/projected/5c5e32c6-03aa-44ee-9355-346b7b6c516c-kube-api-access-2gfgz\") pod \"certified-operators-96865\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.710950 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7qqsx"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.712336 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.730734 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7qqsx"] Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.753884 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.754147 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.754200 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.757773 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 
13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.758485 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.258468543 +0000 UTC m=+150.284458024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.760485 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.783212 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-14 13:49:36 +0000 UTC, rotation deadline is 2026-11-15 16:16:36.98675923 +0000 UTC Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.783253 4750 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6578h21m59.203508474s for next certificate rotation Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.840931 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.856713 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-utilities\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.856821 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjsz8\" (UniqueName: \"kubernetes.io/projected/eda53ecd-2d72-46c1-b809-c5753c174cd9-kube-api-access-cjsz8\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.856849 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-catalog-content\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.856898 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.856942 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.857434 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.357417136 +0000 UTC m=+150.383406617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.871430 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.947357 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:37 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:37 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:37 crc 
kubenswrapper[4750]: healthz check failed Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.947413 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.960666 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.961131 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-utilities\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.961235 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.962154 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.46207855 +0000 UTC m=+150.488068031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.963025 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjsz8\" (UniqueName: \"kubernetes.io/projected/eda53ecd-2d72-46c1-b809-c5753c174cd9-kube-api-access-cjsz8\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.963088 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-catalog-content\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.963188 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:37 crc kubenswrapper[4750]: E0214 13:54:37.963623 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 13:54:38.463608287 +0000 UTC m=+150.489597768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.964307 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-catalog-content\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.964458 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-utilities\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.967957 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.976524 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 13:54:37 crc kubenswrapper[4750]: I0214 13:54:37.990106 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjsz8\" (UniqueName: \"kubernetes.io/projected/eda53ecd-2d72-46c1-b809-c5753c174cd9-kube-api-access-cjsz8\") pod \"community-operators-7qqsx\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.018337 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdd5l"] Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.057558 4750 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v5qh4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.057628 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" podUID="1b3c92ae-8baa-4f32-a45f-beb3bb4647aa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.064759 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.065180 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.066177 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.566157718 +0000 UTC m=+150.592147199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.117843 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.134348 4750 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dj6l8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.134433 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" podUID="f76c5124-cf8d-43a4-a700-d19319e6a329" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.167662 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bwqlb" Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.169255 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.172441 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.672425332 +0000 UTC m=+150.698414813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.188668 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8hcx"] Feb 14 13:54:38 crc kubenswrapper[4750]: W0214 13:54:38.217071 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc4d9a25_0fa3_4c70_b5ca_6e7ffa2860f8.slice/crio-2bbfee6f223314407a1113c27043c47dcbfbf0b9045919a4223a19db866901b0 WatchSource:0}: Error finding container 2bbfee6f223314407a1113c27043c47dcbfbf0b9045919a4223a19db866901b0: Status 404 returned error can't find the container with id 2bbfee6f223314407a1113c27043c47dcbfbf0b9045919a4223a19db866901b0 Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.266407 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.270596 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.270962 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.770944165 +0000 UTC m=+150.796933646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.288102 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96865"] Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.373981 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:38 crc kubenswrapper[4750]: 
E0214 13:54:38.374399 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.874378425 +0000 UTC m=+150.900367906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.490728 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.491134 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:38.991105061 +0000 UTC m=+151.017094542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.593955 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.594281 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.094268169 +0000 UTC m=+151.120257650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.695630 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.696070 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.196048296 +0000 UTC m=+151.222037777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.696325 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.696752 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.196741207 +0000 UTC m=+151.222730688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: W0214 13:54:38.780398 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda53ecd_2d72_46c1_b809_c5753c174cd9.slice/crio-ff5f2ae3f5c9ad0c688ef646ad0b87da8a652db40045563894c5ffc02deefb11 WatchSource:0}: Error finding container ff5f2ae3f5c9ad0c688ef646ad0b87da8a652db40045563894c5ffc02deefb11: Status 404 returned error can't find the container with id ff5f2ae3f5c9ad0c688ef646ad0b87da8a652db40045563894c5ffc02deefb11 Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.798388 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.798846 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.298828217 +0000 UTC m=+151.324817698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.817744 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7qqsx"] Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.899774 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:38 crc kubenswrapper[4750]: E0214 13:54:38.900146 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.400127953 +0000 UTC m=+151.426117434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.939489 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:38 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:38 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:38 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:38 crc kubenswrapper[4750]: I0214 13:54:38.939563 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:38 crc kubenswrapper[4750]: W0214 13:54:38.982268 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-6a2973c07ad873d959ca506862005b4874d44e342d01963a7080a94658966b6b WatchSource:0}: Error finding container 6a2973c07ad873d959ca506862005b4874d44e342d01963a7080a94658966b6b: Status 404 returned error can't find the container with id 6a2973c07ad873d959ca506862005b4874d44e342d01963a7080a94658966b6b Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.002984 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.003141 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.503099373 +0000 UTC m=+151.529088854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.003393 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.003710 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.503697049 +0000 UTC m=+151.529686530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.096778 4750 generic.go:334] "Generic (PLEG): container finished" podID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerID="d52f5571f12c8fa64628da8e4038ee5bbd4d96331c199c685ab5ee8699f6fe14" exitCode=0 Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.096847 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerDied","Data":"d52f5571f12c8fa64628da8e4038ee5bbd4d96331c199c685ab5ee8699f6fe14"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.096872 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerStarted","Data":"af9066c3b974bbd28f5f5ab2d0380a30b59dc075216d4546ce5911daf68f6443"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.098397 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.099304 4750 generic.go:334] "Generic (PLEG): container finished" podID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerID="84d61a4cd235b64139004e40c75c3f7d43e49b50feada19c544d3c0d85736acf" exitCode=0 Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.099397 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8hcx" 
event={"ID":"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8","Type":"ContainerDied","Data":"84d61a4cd235b64139004e40c75c3f7d43e49b50feada19c544d3c0d85736acf"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.099425 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8hcx" event={"ID":"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8","Type":"ContainerStarted","Data":"2bbfee6f223314407a1113c27043c47dcbfbf0b9045919a4223a19db866901b0"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.102283 4750 generic.go:334] "Generic (PLEG): container finished" podID="78cf262a-619b-4edd-bffb-55e5d454b23b" containerID="7c4578f066aec2dd97ad7a4113f70b265c30d2f0b662e953b837dd48e5ee1535" exitCode=0 Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.102485 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" event={"ID":"78cf262a-619b-4edd-bffb-55e5d454b23b","Type":"ContainerDied","Data":"7c4578f066aec2dd97ad7a4113f70b265c30d2f0b662e953b837dd48e5ee1535"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.103938 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.104320 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.604304594 +0000 UTC m=+151.630294075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.104904 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qqsx" event={"ID":"eda53ecd-2d72-46c1-b809-c5753c174cd9","Type":"ContainerStarted","Data":"ff5f2ae3f5c9ad0c688ef646ad0b87da8a652db40045563894c5ffc02deefb11"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.115132 4750 generic.go:334] "Generic (PLEG): container finished" podID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerID="66d3c9648ad4841ba6c2d6462ad5492cb9704a42a7b5f2f50313df3727c42bcf" exitCode=0 Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.115264 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96865" event={"ID":"5c5e32c6-03aa-44ee-9355-346b7b6c516c","Type":"ContainerDied","Data":"66d3c9648ad4841ba6c2d6462ad5492cb9704a42a7b5f2f50313df3727c42bcf"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.115297 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96865" event={"ID":"5c5e32c6-03aa-44ee-9355-346b7b6c516c","Type":"ContainerStarted","Data":"9e9e05b121615e6e118e23aa24d7ae3f8162e0ba2a9cd7a2905ed319d8b495ce"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.117609 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6a2973c07ad873d959ca506862005b4874d44e342d01963a7080a94658966b6b"} Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.205958 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.206412 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.706395255 +0000 UTC m=+151.732384736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.310579 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.310781 4750 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.810755096 +0000 UTC m=+151.836744577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.311001 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.311349 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.811340282 +0000 UTC m=+151.837329763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.312891 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vxzp"] Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.317630 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.323550 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.325374 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vxzp"] Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.411700 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.411824 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.911808031 +0000 UTC m=+151.937797512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.411942 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjcw\" (UniqueName: \"kubernetes.io/projected/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-kube-api-access-zjjcw\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.411980 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-utilities\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.412006 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.412137 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-catalog-content\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.412274 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:39.912266361 +0000 UTC m=+151.938255832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.503402 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.504689 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.508127 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.513629 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.513725 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.513916 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjjcw\" (UniqueName: \"kubernetes.io/projected/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-kube-api-access-zjjcw\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.513962 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-utilities\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.514008 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-catalog-content\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " 
pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.514461 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-catalog-content\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.514586 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.014560321 +0000 UTC m=+152.040549802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.514840 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-utilities\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.514980 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.559714 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zjjcw\" (UniqueName: \"kubernetes.io/projected/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-kube-api-access-zjjcw\") pod \"redhat-marketplace-9vxzp\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.615082 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.615162 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abbb4294-78f6-436b-9d0e-59393c2be242-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.615238 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abbb4294-78f6-436b-9d0e-59393c2be242-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.615520 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.115508691 +0000 UTC m=+152.141498172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.704926 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.716155 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.716473 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abbb4294-78f6-436b-9d0e-59393c2be242-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.716557 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abbb4294-78f6-436b-9d0e-59393c2be242-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.716958 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/abbb4294-78f6-436b-9d0e-59393c2be242-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.716977 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.216945033 +0000 UTC m=+152.242934514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.721191 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xk92k"] Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.722454 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.741853 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abbb4294-78f6-436b-9d0e-59393c2be242-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.745716 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk92k"] Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.817903 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-utilities\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.817947 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.817972 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-catalog-content\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.818087 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhjk\" (UniqueName: \"kubernetes.io/projected/3f741224-7534-4c4b-be4c-1e91015d0a37-kube-api-access-gqhjk\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.818251 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.318240159 +0000 UTC m=+152.344229640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.833474 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.867705 4750 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.920874 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.921357 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-utilities\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.921426 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-catalog-content\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.921480 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqhjk\" (UniqueName: \"kubernetes.io/projected/3f741224-7534-4c4b-be4c-1e91015d0a37-kube-api-access-gqhjk\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: E0214 13:54:39.922550 4750 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.422520426 +0000 UTC m=+152.448509907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.923032 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-utilities\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.923403 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-catalog-content\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.942183 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:39 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:39 crc kubenswrapper[4750]: [+]process-running ok Feb 
14 13:54:39 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.942229 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:39 crc kubenswrapper[4750]: I0214 13:54:39.947914 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqhjk\" (UniqueName: \"kubernetes.io/projected/3f741224-7534-4c4b-be4c-1e91015d0a37-kube-api-access-gqhjk\") pod \"redhat-marketplace-xk92k\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.011308 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vxzp"] Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.023050 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.023381 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.523368872 +0000 UTC m=+152.549358353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.046596 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.105189 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.124855 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.125051 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.625021733 +0000 UTC m=+152.651011214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.125146 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.125558 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.625542156 +0000 UTC m=+152.651531637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.150419 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"647f6ff7958e89b739fd1f2393a7c6373fd2d8358d8cb539453333c5c3f0b19f"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.150462 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"66c8a519b38c5014ac2624cd1c70eb99b852acafdd3a5c1f3b6da0a5590d88ab"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.152094 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.156468 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6e52e1c97edea26c8d3ffb786cb7067f47deb70d057fc0e197402d4a94669c0e"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.156527 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"004b940f059e75a6b7c6e52333e6edb05785d5ac5b2aaee12a27d2fa3eda06df"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.171495 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vxzp" event={"ID":"76bd762e-0ab4-4d0a-8718-2fc5a0f23747","Type":"ContainerStarted","Data":"f887654072498dadc74d1d18dd3bce05aa955a2d2b97ad8493db600146d289a6"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.188957 4750 generic.go:334] "Generic (PLEG): container finished" podID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerID="b3823b2170f61a2e9d3ee12a1646952d0c016413d145c8b00452cc13ec49d70f" exitCode=0 Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.189038 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qqsx" event={"ID":"eda53ecd-2d72-46c1-b809-c5753c174cd9","Type":"ContainerDied","Data":"b3823b2170f61a2e9d3ee12a1646952d0c016413d145c8b00452cc13ec49d70f"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.195957 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" event={"ID":"7295b75e-297a-4aec-bcd5-b2a048229985","Type":"ContainerStarted","Data":"213737b2b341ab1317b8404312f060630f72a1b27457e38c8c9d0996c426badb"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.196001 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" event={"ID":"7295b75e-297a-4aec-bcd5-b2a048229985","Type":"ContainerStarted","Data":"854c7a6adbd3fbda562f338e179f89601dd03b28d726b7bf6f91159e952b5bb7"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.200492 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"46df17adaf578fe7b3d54962c0163f2b4151de7723577ec9e5abc10e35496727"} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.227683 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.230799 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.730773565 +0000 UTC m=+152.756763046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.308350 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hbtg9"] Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.309670 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.317263 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.330962 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hbtg9"] Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.337048 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.337314 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.837303412 +0000 UTC m=+152.863292893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.337523 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-utilities\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.337578 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktfcs\" (UniqueName: \"kubernetes.io/projected/d241cf01-4aa5-46af-9900-0c6a7880f9f4-kube-api-access-ktfcs\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.337597 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-catalog-content\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.439046 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.439368 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-utilities\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.439442 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktfcs\" (UniqueName: \"kubernetes.io/projected/d241cf01-4aa5-46af-9900-0c6a7880f9f4-kube-api-access-ktfcs\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.439469 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-catalog-content\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.439571 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:40.939525478 +0000 UTC m=+152.965514969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.439907 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-catalog-content\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.440224 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-utilities\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.463940 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktfcs\" (UniqueName: \"kubernetes.io/projected/d241cf01-4aa5-46af-9900-0c6a7880f9f4-kube-api-access-ktfcs\") pod \"redhat-operators-hbtg9\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.541416 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.541857 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 13:54:41.041821828 +0000 UTC m=+153.067811309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2zdhw" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.578398 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.642996 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.643087 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cf262a-619b-4edd-bffb-55e5d454b23b-config-volume\") pod \"78cf262a-619b-4edd-bffb-55e5d454b23b\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.643200 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/78cf262a-619b-4edd-bffb-55e5d454b23b-secret-volume\") pod \"78cf262a-619b-4edd-bffb-55e5d454b23b\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.643242 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6fmk\" (UniqueName: \"kubernetes.io/projected/78cf262a-619b-4edd-bffb-55e5d454b23b-kube-api-access-x6fmk\") pod \"78cf262a-619b-4edd-bffb-55e5d454b23b\" (UID: \"78cf262a-619b-4edd-bffb-55e5d454b23b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.644175 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 13:54:41.144147639 +0000 UTC m=+153.170137120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.644478 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cf262a-619b-4edd-bffb-55e5d454b23b-config-volume" (OuterVolumeSpecName: "config-volume") pod "78cf262a-619b-4edd-bffb-55e5d454b23b" (UID: "78cf262a-619b-4edd-bffb-55e5d454b23b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.648494 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78cf262a-619b-4edd-bffb-55e5d454b23b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "78cf262a-619b-4edd-bffb-55e5d454b23b" (UID: "78cf262a-619b-4edd-bffb-55e5d454b23b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.649026 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cf262a-619b-4edd-bffb-55e5d454b23b-kube-api-access-x6fmk" (OuterVolumeSpecName: "kube-api-access-x6fmk") pod "78cf262a-619b-4edd-bffb-55e5d454b23b" (UID: "78cf262a-619b-4edd-bffb-55e5d454b23b"). InnerVolumeSpecName "kube-api-access-x6fmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.650739 4750 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-14T13:54:39.867735731Z","Handler":null,"Name":""} Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.659816 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.669517 4750 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.669568 4750 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.705317 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dj6l8" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.705912 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7r67"] Feb 14 13:54:40 crc kubenswrapper[4750]: E0214 13:54:40.706384 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf262a-619b-4edd-bffb-55e5d454b23b" containerName="collect-profiles" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.706407 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cf262a-619b-4edd-bffb-55e5d454b23b" containerName="collect-profiles" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.706528 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cf262a-619b-4edd-bffb-55e5d454b23b" containerName="collect-profiles" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.707286 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.713377 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk92k"] Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.720545 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7r67"] Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745612 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-catalog-content\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745685 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745718 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-utilities\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745782 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4m67\" (UniqueName: \"kubernetes.io/projected/d67ac662-6ad8-42d6-be21-e600814a6074-kube-api-access-k4m67\") pod \"redhat-operators-l7r67\" 
(UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745822 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cf262a-619b-4edd-bffb-55e5d454b23b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745837 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78cf262a-619b-4edd-bffb-55e5d454b23b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.745849 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6fmk\" (UniqueName: \"kubernetes.io/projected/78cf262a-619b-4edd-bffb-55e5d454b23b-kube-api-access-x6fmk\") on node \"crc\" DevicePath \"\"" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.750622 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.750665 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.800931 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2zdhw\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.830410 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.847097 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.847279 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-catalog-content\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.847341 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-utilities\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.847413 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4m67\" (UniqueName: \"kubernetes.io/projected/d67ac662-6ad8-42d6-be21-e600814a6074-kube-api-access-k4m67\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.848540 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-catalog-content\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " 
pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.848663 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-utilities\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.863176 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.868619 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4m67\" (UniqueName: \"kubernetes.io/projected/d67ac662-6ad8-42d6-be21-e600814a6074-kube-api-access-k4m67\") pod \"redhat-operators-l7r67\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.941280 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:40 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:40 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:40 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.941652 4750 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.948013 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.963281 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-sbvl5" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.986900 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.987051 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:40 crc kubenswrapper[4750]: I0214 13:54:40.998803 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.037821 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.246356 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk92k" event={"ID":"3f741224-7534-4c4b-be4c-1e91015d0a37","Type":"ContainerStarted","Data":"021c2165dc884d0a9e31a73f2e5c58c149c17b4d2687e642ab196e28cf49565c"} Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.267630 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" event={"ID":"78cf262a-619b-4edd-bffb-55e5d454b23b","Type":"ContainerDied","Data":"98400f4207455f80cedcf6cfaa6c1a24080389dfd62c7ad69fe2e4534394d203"} Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.267715 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98400f4207455f80cedcf6cfaa6c1a24080389dfd62c7ad69fe2e4534394d203" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.267826 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.269688 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hbtg9"] Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.271136 4750 generic.go:334] "Generic (PLEG): container finished" podID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerID="35629182360f25bd3d42e0aa0e612c57cf24e24b6e621bb951f1d7493ad94cd1" exitCode=0 Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.271206 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vxzp" event={"ID":"76bd762e-0ab4-4d0a-8718-2fc5a0f23747","Type":"ContainerDied","Data":"35629182360f25bd3d42e0aa0e612c57cf24e24b6e621bb951f1d7493ad94cd1"} Feb 14 13:54:41 crc kubenswrapper[4750]: W0214 13:54:41.309377 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd241cf01_4aa5_46af_9900_0c6a7880f9f4.slice/crio-5896f8ec7e461e82fdea4b1666be00ae518ad18e51fba99e7ca51272b7fa1d9b WatchSource:0}: Error finding container 5896f8ec7e461e82fdea4b1666be00ae518ad18e51fba99e7ca51272b7fa1d9b: Status 404 returned error can't find the container with id 5896f8ec7e461e82fdea4b1666be00ae518ad18e51fba99e7ca51272b7fa1d9b Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.313409 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2zdhw"] Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.313804 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" event={"ID":"7295b75e-297a-4aec-bcd5-b2a048229985","Type":"ContainerStarted","Data":"008bc2ae17625273246e1b0713b2d2eeb5acc666279d2dd1e06177e00086efc9"} Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.325909 4750 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abbb4294-78f6-436b-9d0e-59393c2be242","Type":"ContainerStarted","Data":"cf40bf4ffd7e5bd0ad318d07a890e64272217136e19c2ca2e7b394bcbe84fd3a"} Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.325943 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abbb4294-78f6-436b-9d0e-59393c2be242","Type":"ContainerStarted","Data":"2ed0cd5ef56d9d249ec850aee075435e36305d4a24d2f1f91fb5bd1ac5dc6e6c"} Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.333740 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c4l2n" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.399153 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lh5rn" podStartSLOduration=13.399136603 podStartE2EDuration="13.399136603s" podCreationTimestamp="2026-02-14 13:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:41.397380166 +0000 UTC m=+153.423369657" watchObservedRunningTime="2026-02-14 13:54:41.399136603 +0000 UTC m=+153.425126084" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.733792 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.734044 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: 
connect: connection refused" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.733792 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.734376 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.771039 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.771089 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.772461 4750 patch_prober.go:28] interesting pod/console-f9d7485db-qccxx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.772503 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qccxx" podUID="6adb3cca-d5b3-4216-868b-73086725a3ed" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.854746 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7r67"] Feb 14 13:54:41 crc kubenswrapper[4750]: W0214 
13:54:41.871371 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67ac662_6ad8_42d6_be21_e600814a6074.slice/crio-6360a11958c650da6e3674c5a5002a18100008da2341b65ce58d6b318419e7db WatchSource:0}: Error finding container 6360a11958c650da6e3674c5a5002a18100008da2341b65ce58d6b318419e7db: Status 404 returned error can't find the container with id 6360a11958c650da6e3674c5a5002a18100008da2341b65ce58d6b318419e7db Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.936794 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.942240 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:41 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:41 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:41 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.942321 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:41 crc kubenswrapper[4750]: I0214 13:54:41.958037 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.008734 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v5qh4" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.344878 4750 
generic.go:334] "Generic (PLEG): container finished" podID="abbb4294-78f6-436b-9d0e-59393c2be242" containerID="cf40bf4ffd7e5bd0ad318d07a890e64272217136e19c2ca2e7b394bcbe84fd3a" exitCode=0 Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.344953 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abbb4294-78f6-436b-9d0e-59393c2be242","Type":"ContainerDied","Data":"cf40bf4ffd7e5bd0ad318d07a890e64272217136e19c2ca2e7b394bcbe84fd3a"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.346798 4750 generic.go:334] "Generic (PLEG): container finished" podID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerID="90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec" exitCode=0 Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.346851 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk92k" event={"ID":"3f741224-7534-4c4b-be4c-1e91015d0a37","Type":"ContainerDied","Data":"90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.354347 4750 generic.go:334] "Generic (PLEG): container finished" podID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerID="69e44c13ae03c970cbb431d5a0d90d1d40fa350325405294e126e345b1369262" exitCode=0 Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.355083 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerDied","Data":"69e44c13ae03c970cbb431d5a0d90d1d40fa350325405294e126e345b1369262"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.355127 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerStarted","Data":"5896f8ec7e461e82fdea4b1666be00ae518ad18e51fba99e7ca51272b7fa1d9b"} Feb 14 
13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.371586 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" event={"ID":"6da4864c-6af8-4a50-b55f-c98d904808be","Type":"ContainerStarted","Data":"67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.371741 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" event={"ID":"6da4864c-6af8-4a50-b55f-c98d904808be","Type":"ContainerStarted","Data":"c68dad95a65f303e8d5c2bf7df97cfe04f111f403c31aee2f4fcae055a1760c3"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.371766 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.375915 4750 generic.go:334] "Generic (PLEG): container finished" podID="d67ac662-6ad8-42d6-be21-e600814a6074" containerID="af1bd3bcebc16de68daad2cd18d29988fc8c953c59b4957c245d60709df7f3f9" exitCode=0 Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.375949 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7r67" event={"ID":"d67ac662-6ad8-42d6-be21-e600814a6074","Type":"ContainerDied","Data":"af1bd3bcebc16de68daad2cd18d29988fc8c953c59b4957c245d60709df7f3f9"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.375992 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7r67" event={"ID":"d67ac662-6ad8-42d6-be21-e600814a6074","Type":"ContainerStarted","Data":"6360a11958c650da6e3674c5a5002a18100008da2341b65ce58d6b318419e7db"} Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.406271 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" podStartSLOduration=132.406248501 
podStartE2EDuration="2m12.406248501s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:54:42.391366225 +0000 UTC m=+154.417355736" watchObservedRunningTime="2026-02-14 13:54:42.406248501 +0000 UTC m=+154.432237982" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.539586 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.540922 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.543691 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.544072 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.546606 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.607391 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.607503 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.709345 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.709793 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.709819 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.746009 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.766740 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.803558 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.811520 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abbb4294-78f6-436b-9d0e-59393c2be242-kubelet-dir\") pod \"abbb4294-78f6-436b-9d0e-59393c2be242\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.811649 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abbb4294-78f6-436b-9d0e-59393c2be242-kube-api-access\") pod \"abbb4294-78f6-436b-9d0e-59393c2be242\" (UID: \"abbb4294-78f6-436b-9d0e-59393c2be242\") " Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.811657 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abbb4294-78f6-436b-9d0e-59393c2be242-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "abbb4294-78f6-436b-9d0e-59393c2be242" (UID: "abbb4294-78f6-436b-9d0e-59393c2be242"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.811894 4750 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abbb4294-78f6-436b-9d0e-59393c2be242-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.820286 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbb4294-78f6-436b-9d0e-59393c2be242-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "abbb4294-78f6-436b-9d0e-59393c2be242" (UID: "abbb4294-78f6-436b-9d0e-59393c2be242"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.870295 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.913216 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abbb4294-78f6-436b-9d0e-59393c2be242-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.941791 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:42 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:42 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:42 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:42 crc kubenswrapper[4750]: I0214 13:54:42.941866 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.305958 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 13:54:43 crc kubenswrapper[4750]: W0214 13:54:43.358552 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbcaf9fdb_9ea8_4a8a_b849_e6d83430d346.slice/crio-6f6919cc8a2fd32217ebfbe6fee00351efffe1c285b375fc85b62c00b5433fd4 WatchSource:0}: Error finding container 6f6919cc8a2fd32217ebfbe6fee00351efffe1c285b375fc85b62c00b5433fd4: Status 404 returned error can't find the container with id 6f6919cc8a2fd32217ebfbe6fee00351efffe1c285b375fc85b62c00b5433fd4 Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.429510 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346","Type":"ContainerStarted","Data":"6f6919cc8a2fd32217ebfbe6fee00351efffe1c285b375fc85b62c00b5433fd4"} Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.436976 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abbb4294-78f6-436b-9d0e-59393c2be242","Type":"ContainerDied","Data":"2ed0cd5ef56d9d249ec850aee075435e36305d4a24d2f1f91fb5bd1ac5dc6e6c"} Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.437026 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed0cd5ef56d9d249ec850aee075435e36305d4a24d2f1f91fb5bd1ac5dc6e6c" Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.437261 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.948493 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:43 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:43 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:43 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:43 crc kubenswrapper[4750]: I0214 13:54:43.949322 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:44 crc kubenswrapper[4750]: I0214 13:54:44.030681 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qhxgj" Feb 14 13:54:44 crc kubenswrapper[4750]: I0214 13:54:44.940661 4750 patch_prober.go:28] interesting pod/router-default-5444994796-nt44b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 13:54:44 crc kubenswrapper[4750]: [-]has-synced failed: reason withheld Feb 14 13:54:44 crc kubenswrapper[4750]: [+]process-running ok Feb 14 13:54:44 crc kubenswrapper[4750]: healthz check failed Feb 14 13:54:44 crc kubenswrapper[4750]: I0214 13:54:44.940874 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nt44b" podUID="2e5a7961-9acb-485c-bca3-dcf013c839fa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 13:54:45 crc kubenswrapper[4750]: I0214 13:54:45.457628 4750 
generic.go:334] "Generic (PLEG): container finished" podID="bcaf9fdb-9ea8-4a8a-b849-e6d83430d346" containerID="a490c42cd0daff555ba321bbb51db37c1f620860268788ea5972f8747e652aec" exitCode=0 Feb 14 13:54:45 crc kubenswrapper[4750]: I0214 13:54:45.458024 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346","Type":"ContainerDied","Data":"a490c42cd0daff555ba321bbb51db37c1f620860268788ea5972f8747e652aec"} Feb 14 13:54:45 crc kubenswrapper[4750]: I0214 13:54:45.939664 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:45 crc kubenswrapper[4750]: I0214 13:54:45.947251 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nt44b" Feb 14 13:54:47 crc kubenswrapper[4750]: I0214 13:54:47.316576 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 13:54:51 crc kubenswrapper[4750]: I0214 13:54:51.733287 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:51 crc kubenswrapper[4750]: I0214 13:54:51.733892 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 14 13:54:51 crc kubenswrapper[4750]: I0214 13:54:51.733358 4750 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfv6f container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 14 13:54:51 crc kubenswrapper[4750]: I0214 13:54:51.733990 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfv6f" podUID="94fbc5e9-43f4-4efc-9964-f6dfe96a982b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 14 13:54:51 crc kubenswrapper[4750]: I0214 13:54:51.774908 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:51 crc kubenswrapper[4750]: I0214 13:54:51.779271 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 13:54:53 crc kubenswrapper[4750]: I0214 13:54:53.049728 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:53 crc kubenswrapper[4750]: I0214 13:54:53.059274 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29305ecd-7a38-4ed0-b02e-b391e5487699-metrics-certs\") pod \"network-metrics-daemon-l6hd4\" (UID: \"29305ecd-7a38-4ed0-b02e-b391e5487699\") " pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:54:53 crc kubenswrapper[4750]: I0214 13:54:53.293386 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l6hd4" Feb 14 13:55:00 crc kubenswrapper[4750]: I0214 13:55:00.131712 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 13:55:00 crc kubenswrapper[4750]: I0214 13:55:00.132966 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 13:55:00 crc kubenswrapper[4750]: I0214 13:55:00.841835 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:55:01 crc kubenswrapper[4750]: I0214 13:55:01.749889 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dfv6f" Feb 14 13:55:11 crc kubenswrapper[4750]: I0214 13:55:11.830173 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.028878 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kube-api-access\") pod \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.029033 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kubelet-dir\") pod \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\" (UID: \"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346\") " Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.029485 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bcaf9fdb-9ea8-4a8a-b849-e6d83430d346" (UID: "bcaf9fdb-9ea8-4a8a-b849-e6d83430d346"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.038174 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bcaf9fdb-9ea8-4a8a-b849-e6d83430d346" (UID: "bcaf9fdb-9ea8-4a8a-b849-e6d83430d346"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.132501 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.132570 4750 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf9fdb-9ea8-4a8a-b849-e6d83430d346-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.178559 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8svsj" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.777428 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bcaf9fdb-9ea8-4a8a-b849-e6d83430d346","Type":"ContainerDied","Data":"6f6919cc8a2fd32217ebfbe6fee00351efffe1c285b375fc85b62c00b5433fd4"} Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.777540 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f6919cc8a2fd32217ebfbe6fee00351efffe1c285b375fc85b62c00b5433fd4" Feb 14 13:55:12 crc kubenswrapper[4750]: I0214 13:55:12.777640 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 13:55:18 crc kubenswrapper[4750]: I0214 13:55:18.345783 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.133397 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 13:55:19 crc kubenswrapper[4750]: E0214 13:55:19.133961 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbb4294-78f6-436b-9d0e-59393c2be242" containerName="pruner" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.134040 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbb4294-78f6-436b-9d0e-59393c2be242" containerName="pruner" Feb 14 13:55:19 crc kubenswrapper[4750]: E0214 13:55:19.134107 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcaf9fdb-9ea8-4a8a-b849-e6d83430d346" containerName="pruner" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.134186 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcaf9fdb-9ea8-4a8a-b849-e6d83430d346" containerName="pruner" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.134334 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaf9fdb-9ea8-4a8a-b849-e6d83430d346" containerName="pruner" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.134397 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbb4294-78f6-436b-9d0e-59393c2be242" containerName="pruner" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.134817 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.140859 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.141268 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.151007 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.151084 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.153201 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.252394 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.252441 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.252625 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.288435 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:19 crc kubenswrapper[4750]: I0214 13:55:19.453293 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:22 crc kubenswrapper[4750]: E0214 13:55:22.274960 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 13:55:22 crc kubenswrapper[4750]: E0214 13:55:22.275426 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4m67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l7r67_openshift-marketplace(d67ac662-6ad8-42d6-be21-e600814a6074): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:22 crc kubenswrapper[4750]: E0214 13:55:22.276648 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l7r67" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" Feb 14 13:55:23 crc kubenswrapper[4750]: E0214 13:55:23.162992 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l7r67" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.332932 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.333599 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.346014 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.435424 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-var-lock\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.435487 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.435525 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38fcd686-2510-46e2-9241-bfc27bfe23cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.536617 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-var-lock\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.536697 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.536741 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38fcd686-2510-46e2-9241-bfc27bfe23cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.536743 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-var-lock\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.536884 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.559408 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38fcd686-2510-46e2-9241-bfc27bfe23cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:24 crc kubenswrapper[4750]: E0214 13:55:24.618704 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 13:55:24 crc 
kubenswrapper[4750]: E0214 13:55:24.618870 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjjcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9vxzp_openshift-marketplace(76bd762e-0ab4-4d0a-8718-2fc5a0f23747): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:24 crc kubenswrapper[4750]: E0214 13:55:24.620439 4750 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9vxzp" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" Feb 14 13:55:24 crc kubenswrapper[4750]: I0214 13:55:24.660370 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:55:26 crc kubenswrapper[4750]: E0214 13:55:26.541833 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9vxzp" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" Feb 14 13:55:26 crc kubenswrapper[4750]: E0214 13:55:26.848319 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 13:55:26 crc kubenswrapper[4750]: E0214 13:55:26.848857 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5hbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w8hcx_openshift-marketplace(bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:26 crc kubenswrapper[4750]: E0214 13:55:26.850122 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w8hcx" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" Feb 14 13:55:30 crc 
kubenswrapper[4750]: I0214 13:55:30.129529 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 13:55:30 crc kubenswrapper[4750]: I0214 13:55:30.129615 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 13:55:31 crc kubenswrapper[4750]: E0214 13:55:31.375442 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w8hcx" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" Feb 14 13:55:31 crc kubenswrapper[4750]: I0214 13:55:31.655971 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l6hd4"] Feb 14 13:55:31 crc kubenswrapper[4750]: W0214 13:55:31.671226 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29305ecd_7a38_4ed0_b02e_b391e5487699.slice/crio-5888283b191f6899af701ddfb07ab2fbe9c93f18ad983a9019958099845c76a8 WatchSource:0}: Error finding container 5888283b191f6899af701ddfb07ab2fbe9c93f18ad983a9019958099845c76a8: Status 404 returned error can't find the container with id 5888283b191f6899af701ddfb07ab2fbe9c93f18ad983a9019958099845c76a8 Feb 14 13:55:31 crc kubenswrapper[4750]: E0214 13:55:31.744999 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 13:55:31 crc kubenswrapper[4750]: E0214 13:55:31.745194 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57t5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sdd5l_openshift-marketplace(f2d2bcf7-e24d-4879-85df-777e045107ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Feb 14 13:55:31 crc kubenswrapper[4750]: E0214 13:55:31.746353 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sdd5l" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" Feb 14 13:55:31 crc kubenswrapper[4750]: I0214 13:55:31.824369 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 13:55:31 crc kubenswrapper[4750]: I0214 13:55:31.825658 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 13:55:31 crc kubenswrapper[4750]: W0214 13:55:31.849891 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod38fcd686_2510_46e2_9241_bfc27bfe23cc.slice/crio-8e1a4d897f6d1853752376e178dff310a721202985f22f8c2968fc90dab75258 WatchSource:0}: Error finding container 8e1a4d897f6d1853752376e178dff310a721202985f22f8c2968fc90dab75258: Status 404 returned error can't find the container with id 8e1a4d897f6d1853752376e178dff310a721202985f22f8c2968fc90dab75258 Feb 14 13:55:31 crc kubenswrapper[4750]: I0214 13:55:31.873634 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"38fcd686-2510-46e2-9241-bfc27bfe23cc","Type":"ContainerStarted","Data":"8e1a4d897f6d1853752376e178dff310a721202985f22f8c2968fc90dab75258"} Feb 14 13:55:31 crc kubenswrapper[4750]: I0214 13:55:31.874546 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d","Type":"ContainerStarted","Data":"c9c85572f2f19b4632711073fc17f43f51e50c19d55c8f140408882e027ae999"} Feb 14 13:55:31 crc kubenswrapper[4750]: I0214 
13:55:31.875235 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" event={"ID":"29305ecd-7a38-4ed0-b02e-b391e5487699","Type":"ContainerStarted","Data":"5888283b191f6899af701ddfb07ab2fbe9c93f18ad983a9019958099845c76a8"} Feb 14 13:55:31 crc kubenswrapper[4750]: E0214 13:55:31.913439 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sdd5l" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" Feb 14 13:55:33 crc kubenswrapper[4750]: I0214 13:55:33.892743 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" event={"ID":"29305ecd-7a38-4ed0-b02e-b391e5487699","Type":"ContainerStarted","Data":"0dfa6ee2f3a1fd8cbb20d95cf10939d709447115bf579ad6005e04f3fbd83a26"} Feb 14 13:55:33 crc kubenswrapper[4750]: E0214 13:55:33.971971 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 13:55:33 crc kubenswrapper[4750]: E0214 13:55:33.972240 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjsz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7qqsx_openshift-marketplace(eda53ecd-2d72-46c1-b809-c5753c174cd9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:33 crc kubenswrapper[4750]: E0214 13:55:33.973552 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7qqsx" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" Feb 14 13:55:34 crc 
kubenswrapper[4750]: E0214 13:55:34.902336 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7qqsx" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" Feb 14 13:55:35 crc kubenswrapper[4750]: I0214 13:55:35.907516 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d","Type":"ContainerStarted","Data":"33143ac2a286576d2e7689f5315fcac8c1abbbed634b2aa20605f7954332a376"} Feb 14 13:55:35 crc kubenswrapper[4750]: I0214 13:55:35.910638 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"38fcd686-2510-46e2-9241-bfc27bfe23cc","Type":"ContainerStarted","Data":"795a47c241ad27c87f4ac6f24df7a4ea8c1fa12b108bcfd1edcc944aa066cdce"} Feb 14 13:55:35 crc kubenswrapper[4750]: I0214 13:55:35.925389 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.925362874 podStartE2EDuration="16.925362874s" podCreationTimestamp="2026-02-14 13:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:55:35.92485946 +0000 UTC m=+207.950848971" watchObservedRunningTime="2026-02-14 13:55:35.925362874 +0000 UTC m=+207.951352395" Feb 14 13:55:35 crc kubenswrapper[4750]: I0214 13:55:35.949450 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.949423488 podStartE2EDuration="11.949423488s" podCreationTimestamp="2026-02-14 13:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 13:55:35.944170875 +0000 UTC m=+207.970160386" watchObservedRunningTime="2026-02-14 13:55:35.949423488 +0000 UTC m=+207.975413009" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.224315 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.224529 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktfcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fallback
ToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hbtg9_openshift-marketplace(d241cf01-4aa5-46af-9900-0c6a7880f9f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.225757 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hbtg9" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.475649 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.475876 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqhjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xk92k_openshift-marketplace(3f741224-7534-4c4b-be4c-1e91015d0a37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.477139 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xk92k" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" Feb 14 13:55:36 crc 
kubenswrapper[4750]: E0214 13:55:36.618126 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.618274 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gfgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-96865_openshift-marketplace(5c5e32c6-03aa-44ee-9355-346b7b6c516c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.619566 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-96865" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" Feb 14 13:55:36 crc kubenswrapper[4750]: I0214 13:55:36.917966 4750 generic.go:334] "Generic (PLEG): container finished" podID="cd859b0d-57f2-4c9a-82a2-8ea16d069f3d" containerID="33143ac2a286576d2e7689f5315fcac8c1abbbed634b2aa20605f7954332a376" exitCode=0 Feb 14 13:55:36 crc kubenswrapper[4750]: I0214 13:55:36.918278 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d","Type":"ContainerDied","Data":"33143ac2a286576d2e7689f5315fcac8c1abbbed634b2aa20605f7954332a376"} Feb 14 13:55:36 crc kubenswrapper[4750]: I0214 13:55:36.921887 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l6hd4" event={"ID":"29305ecd-7a38-4ed0-b02e-b391e5487699","Type":"ContainerStarted","Data":"ed911c4a194b136ea9cc9b18b75c6d3c648892ce569abc669c52f9885180b984"} Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.924635 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hbtg9" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.924635 4750 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-96865" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" Feb 14 13:55:36 crc kubenswrapper[4750]: E0214 13:55:36.924752 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xk92k" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" Feb 14 13:55:36 crc kubenswrapper[4750]: I0214 13:55:36.993766 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l6hd4" podStartSLOduration=186.993743851 podStartE2EDuration="3m6.993743851s" podCreationTimestamp="2026-02-14 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:55:36.990943425 +0000 UTC m=+209.016932926" watchObservedRunningTime="2026-02-14 13:55:36.993743851 +0000 UTC m=+209.019733352" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.144175 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.268086 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kube-api-access\") pod \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.268373 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kubelet-dir\") pod \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\" (UID: \"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d\") " Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.268499 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd859b0d-57f2-4c9a-82a2-8ea16d069f3d" (UID: "cd859b0d-57f2-4c9a-82a2-8ea16d069f3d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.268856 4750 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.273285 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd859b0d-57f2-4c9a-82a2-8ea16d069f3d" (UID: "cd859b0d-57f2-4c9a-82a2-8ea16d069f3d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.369885 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd859b0d-57f2-4c9a-82a2-8ea16d069f3d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.932629 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cd859b0d-57f2-4c9a-82a2-8ea16d069f3d","Type":"ContainerDied","Data":"c9c85572f2f19b4632711073fc17f43f51e50c19d55c8f140408882e027ae999"} Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.932695 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 13:55:38 crc kubenswrapper[4750]: I0214 13:55:38.932704 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c85572f2f19b4632711073fc17f43f51e50c19d55c8f140408882e027ae999" Feb 14 13:55:40 crc kubenswrapper[4750]: I0214 13:55:40.946606 4750 generic.go:334] "Generic (PLEG): container finished" podID="d67ac662-6ad8-42d6-be21-e600814a6074" containerID="6a9a6aa4eef063ef7b4c2d3eb99e01598d96babaab8afdf476043edfa3812b90" exitCode=0 Feb 14 13:55:40 crc kubenswrapper[4750]: I0214 13:55:40.946719 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7r67" event={"ID":"d67ac662-6ad8-42d6-be21-e600814a6074","Type":"ContainerDied","Data":"6a9a6aa4eef063ef7b4c2d3eb99e01598d96babaab8afdf476043edfa3812b90"} Feb 14 13:55:40 crc kubenswrapper[4750]: I0214 13:55:40.950360 4750 generic.go:334] "Generic (PLEG): container finished" podID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerID="3340a86cb5e60b5396023c5d00c1bb5608dd5f7c6beae9470778ccb4d89aa5a6" exitCode=0 Feb 14 13:55:40 crc kubenswrapper[4750]: I0214 13:55:40.950392 4750 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9vxzp" event={"ID":"76bd762e-0ab4-4d0a-8718-2fc5a0f23747","Type":"ContainerDied","Data":"3340a86cb5e60b5396023c5d00c1bb5608dd5f7c6beae9470778ccb4d89aa5a6"} Feb 14 13:55:42 crc kubenswrapper[4750]: I0214 13:55:42.968320 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7r67" event={"ID":"d67ac662-6ad8-42d6-be21-e600814a6074","Type":"ContainerStarted","Data":"25d03125e1c83ba1e532a4c060fe98749cf64816fb991e664eca7ba2b25c23d0"} Feb 14 13:55:42 crc kubenswrapper[4750]: I0214 13:55:42.973470 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vxzp" event={"ID":"76bd762e-0ab4-4d0a-8718-2fc5a0f23747","Type":"ContainerStarted","Data":"967681ad634c5bc057574db531f6490afda5f5f3688bf42472db576e2c7e8db8"} Feb 14 13:55:43 crc kubenswrapper[4750]: I0214 13:55:43.003417 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7r67" podStartSLOduration=3.353011949 podStartE2EDuration="1m3.003393139s" podCreationTimestamp="2026-02-14 13:54:40 +0000 UTC" firstStartedPulling="2026-02-14 13:54:42.39080366 +0000 UTC m=+154.416793141" lastFinishedPulling="2026-02-14 13:55:42.04118481 +0000 UTC m=+214.067174331" observedRunningTime="2026-02-14 13:55:43.000557362 +0000 UTC m=+215.026546873" watchObservedRunningTime="2026-02-14 13:55:43.003393139 +0000 UTC m=+215.029382660" Feb 14 13:55:43 crc kubenswrapper[4750]: I0214 13:55:43.027844 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vxzp" podStartSLOduration=3.5106022169999997 podStartE2EDuration="1m4.027807813s" podCreationTimestamp="2026-02-14 13:54:39 +0000 UTC" firstStartedPulling="2026-02-14 13:54:41.272687159 +0000 UTC m=+153.298676640" lastFinishedPulling="2026-02-14 13:55:41.789892725 +0000 UTC m=+213.815882236" observedRunningTime="2026-02-14 
13:55:43.019445686 +0000 UTC m=+215.045435237" watchObservedRunningTime="2026-02-14 13:55:43.027807813 +0000 UTC m=+215.053797334" Feb 14 13:55:48 crc kubenswrapper[4750]: I0214 13:55:48.004962 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerStarted","Data":"c53cf1b61ed6b75e7722bb572445e0d31610f39bd49313dd0c27838b730cb79e"} Feb 14 13:55:48 crc kubenswrapper[4750]: I0214 13:55:48.007105 4750 generic.go:334] "Generic (PLEG): container finished" podID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerID="790351b5f06b27cad72ca24f95b2ca812b62f9b38181dc67ffafb59ece7a4c20" exitCode=0 Feb 14 13:55:48 crc kubenswrapper[4750]: I0214 13:55:48.007161 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8hcx" event={"ID":"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8","Type":"ContainerDied","Data":"790351b5f06b27cad72ca24f95b2ca812b62f9b38181dc67ffafb59ece7a4c20"} Feb 14 13:55:49 crc kubenswrapper[4750]: I0214 13:55:49.017256 4750 generic.go:334] "Generic (PLEG): container finished" podID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerID="c53cf1b61ed6b75e7722bb572445e0d31610f39bd49313dd0c27838b730cb79e" exitCode=0 Feb 14 13:55:49 crc kubenswrapper[4750]: I0214 13:55:49.017360 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerDied","Data":"c53cf1b61ed6b75e7722bb572445e0d31610f39bd49313dd0c27838b730cb79e"} Feb 14 13:55:49 crc kubenswrapper[4750]: I0214 13:55:49.705669 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:55:49 crc kubenswrapper[4750]: I0214 13:55:49.705725 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 
13:55:49 crc kubenswrapper[4750]: I0214 13:55:49.822820 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.026598 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8hcx" event={"ID":"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8","Type":"ContainerStarted","Data":"6f15605753cef855d25442b204260d4dd140669bc217ca0eb4db0acffec38435"} Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.034402 4750 generic.go:334] "Generic (PLEG): container finished" podID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerID="6bd3cb6ba58d941863b378fb29831d5d944607a16c296555919aa903299b1723" exitCode=0 Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.034646 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qqsx" event={"ID":"eda53ecd-2d72-46c1-b809-c5753c174cd9","Type":"ContainerDied","Data":"6bd3cb6ba58d941863b378fb29831d5d944607a16c296555919aa903299b1723"} Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.039486 4750 generic.go:334] "Generic (PLEG): container finished" podID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerID="d065b3cb66e75750cbd13dd16ee5799edbe3b9e8dd85861b2880ee831346ba5c" exitCode=0 Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.039537 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96865" event={"ID":"5c5e32c6-03aa-44ee-9355-346b7b6c516c","Type":"ContainerDied","Data":"d065b3cb66e75750cbd13dd16ee5799edbe3b9e8dd85861b2880ee831346ba5c"} Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.048091 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerStarted","Data":"bf4a2c153d9aef0a31213671bbd61ab3c28e40d4c685b42c92441e618ecf5cf0"} Feb 14 
13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.051323 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w8hcx" podStartSLOduration=3.118699593 podStartE2EDuration="1m13.051304245s" podCreationTimestamp="2026-02-14 13:54:37 +0000 UTC" firstStartedPulling="2026-02-14 13:54:39.10081291 +0000 UTC m=+151.126802391" lastFinishedPulling="2026-02-14 13:55:49.033417562 +0000 UTC m=+221.059407043" observedRunningTime="2026-02-14 13:55:50.045458356 +0000 UTC m=+222.071447837" watchObservedRunningTime="2026-02-14 13:55:50.051304245 +0000 UTC m=+222.077293726" Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.104546 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:55:50 crc kubenswrapper[4750]: I0214 13:55:50.107631 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sdd5l" podStartSLOduration=2.70145072 podStartE2EDuration="1m13.107616257s" podCreationTimestamp="2026-02-14 13:54:37 +0000 UTC" firstStartedPulling="2026-02-14 13:54:39.098137252 +0000 UTC m=+151.124126733" lastFinishedPulling="2026-02-14 13:55:49.504302779 +0000 UTC m=+221.530292270" observedRunningTime="2026-02-14 13:55:50.106596359 +0000 UTC m=+222.132585860" watchObservedRunningTime="2026-02-14 13:55:50.107616257 +0000 UTC m=+222.133605738" Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.038929 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.040273 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.060611 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qqsx" 
event={"ID":"eda53ecd-2d72-46c1-b809-c5753c174cd9","Type":"ContainerStarted","Data":"18f64db85577343678110935ace664691c8f3cf7ff3e74338fbcee641731897f"} Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.073639 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96865" event={"ID":"5c5e32c6-03aa-44ee-9355-346b7b6c516c","Type":"ContainerStarted","Data":"32e6f022183bc1c92903b261085c8d136b5e31ccc45e608902ce4bba01da1081"} Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.079758 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.083593 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7qqsx" podStartSLOduration=3.798618001 podStartE2EDuration="1m14.083584961s" podCreationTimestamp="2026-02-14 13:54:37 +0000 UTC" firstStartedPulling="2026-02-14 13:54:40.195919869 +0000 UTC m=+152.221909350" lastFinishedPulling="2026-02-14 13:55:50.480886829 +0000 UTC m=+222.506876310" observedRunningTime="2026-02-14 13:55:51.081064222 +0000 UTC m=+223.107053703" watchObservedRunningTime="2026-02-14 13:55:51.083584961 +0000 UTC m=+223.109574442" Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.125340 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:55:51 crc kubenswrapper[4750]: I0214 13:55:51.130827 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96865" podStartSLOduration=2.83440126 podStartE2EDuration="1m14.130816745s" podCreationTimestamp="2026-02-14 13:54:37 +0000 UTC" firstStartedPulling="2026-02-14 13:54:39.118470109 +0000 UTC m=+151.144459580" lastFinishedPulling="2026-02-14 13:55:50.414885584 +0000 UTC m=+222.440875065" observedRunningTime="2026-02-14 
13:55:51.128830981 +0000 UTC m=+223.154820462" watchObservedRunningTime="2026-02-14 13:55:51.130816745 +0000 UTC m=+223.156806226" Feb 14 13:55:52 crc kubenswrapper[4750]: I0214 13:55:52.079932 4750 generic.go:334] "Generic (PLEG): container finished" podID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerID="b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f" exitCode=0 Feb 14 13:55:52 crc kubenswrapper[4750]: I0214 13:55:52.080039 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk92k" event={"ID":"3f741224-7534-4c4b-be4c-1e91015d0a37","Type":"ContainerDied","Data":"b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f"} Feb 14 13:55:54 crc kubenswrapper[4750]: I0214 13:55:54.369904 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7r67"] Feb 14 13:55:54 crc kubenswrapper[4750]: I0214 13:55:54.370909 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7r67" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="registry-server" containerID="cri-o://25d03125e1c83ba1e532a4c060fe98749cf64816fb991e664eca7ba2b25c23d0" gracePeriod=2 Feb 14 13:55:55 crc kubenswrapper[4750]: I0214 13:55:55.098137 4750 generic.go:334] "Generic (PLEG): container finished" podID="d67ac662-6ad8-42d6-be21-e600814a6074" containerID="25d03125e1c83ba1e532a4c060fe98749cf64816fb991e664eca7ba2b25c23d0" exitCode=0 Feb 14 13:55:55 crc kubenswrapper[4750]: I0214 13:55:55.098181 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7r67" event={"ID":"d67ac662-6ad8-42d6-be21-e600814a6074","Type":"ContainerDied","Data":"25d03125e1c83ba1e532a4c060fe98749cf64816fb991e664eca7ba2b25c23d0"} Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.599316 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.770622 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-utilities\") pod \"d67ac662-6ad8-42d6-be21-e600814a6074\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.770919 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4m67\" (UniqueName: \"kubernetes.io/projected/d67ac662-6ad8-42d6-be21-e600814a6074-kube-api-access-k4m67\") pod \"d67ac662-6ad8-42d6-be21-e600814a6074\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.770952 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-catalog-content\") pod \"d67ac662-6ad8-42d6-be21-e600814a6074\" (UID: \"d67ac662-6ad8-42d6-be21-e600814a6074\") " Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.771346 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-utilities" (OuterVolumeSpecName: "utilities") pod "d67ac662-6ad8-42d6-be21-e600814a6074" (UID: "d67ac662-6ad8-42d6-be21-e600814a6074"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.777452 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67ac662-6ad8-42d6-be21-e600814a6074-kube-api-access-k4m67" (OuterVolumeSpecName: "kube-api-access-k4m67") pod "d67ac662-6ad8-42d6-be21-e600814a6074" (UID: "d67ac662-6ad8-42d6-be21-e600814a6074"). InnerVolumeSpecName "kube-api-access-k4m67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.872605 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:56 crc kubenswrapper[4750]: I0214 13:55:56.872902 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4m67\" (UniqueName: \"kubernetes.io/projected/d67ac662-6ad8-42d6-be21-e600814a6074-kube-api-access-k4m67\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.112301 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7r67" event={"ID":"d67ac662-6ad8-42d6-be21-e600814a6074","Type":"ContainerDied","Data":"6360a11958c650da6e3674c5a5002a18100008da2341b65ce58d6b318419e7db"} Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.112344 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7r67" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.112370 4750 scope.go:117] "RemoveContainer" containerID="25d03125e1c83ba1e532a4c060fe98749cf64816fb991e664eca7ba2b25c23d0" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.134297 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d67ac662-6ad8-42d6-be21-e600814a6074" (UID: "d67ac662-6ad8-42d6-be21-e600814a6074"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.176527 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ac662-6ad8-42d6-be21-e600814a6074-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.202287 4750 scope.go:117] "RemoveContainer" containerID="6a9a6aa4eef063ef7b4c2d3eb99e01598d96babaab8afdf476043edfa3812b90" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.432722 4750 scope.go:117] "RemoveContainer" containerID="af1bd3bcebc16de68daad2cd18d29988fc8c953c59b4957c245d60709df7f3f9" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.442575 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7r67"] Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.445183 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7r67"] Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.470993 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.471040 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.512451 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.640391 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.640449 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:55:57 crc 
kubenswrapper[4750]: I0214 13:55:57.678985 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.841537 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.841592 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:55:57 crc kubenswrapper[4750]: I0214 13:55:57.896234 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.117738 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk92k" event={"ID":"3f741224-7534-4c4b-be4c-1e91015d0a37","Type":"ContainerStarted","Data":"4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589"} Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.119087 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerStarted","Data":"542e9b1121ae1cbf5215c3adbb679ace9bb80038c1a2913d47e38ed9f34fe1ba"} Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.120641 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.120664 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.162099 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:55:58 crc kubenswrapper[4750]: 
I0214 13:55:58.167208 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.168489 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xk92k" podStartSLOduration=5.038595826 podStartE2EDuration="1m19.168456613s" podCreationTimestamp="2026-02-14 13:54:39 +0000 UTC" firstStartedPulling="2026-02-14 13:54:42.348199662 +0000 UTC m=+154.374189143" lastFinishedPulling="2026-02-14 13:55:56.478060439 +0000 UTC m=+228.504049930" observedRunningTime="2026-02-14 13:55:58.140182694 +0000 UTC m=+230.166172175" watchObservedRunningTime="2026-02-14 13:55:58.168456613 +0000 UTC m=+230.194446094" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.170270 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.181306 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:55:58 crc kubenswrapper[4750]: I0214 13:55:58.749710 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" path="/var/lib/kubelet/pods/d67ac662-6ad8-42d6-be21-e600814a6074/volumes" Feb 14 13:55:59 crc kubenswrapper[4750]: I0214 13:55:59.127601 4750 generic.go:334] "Generic (PLEG): container finished" podID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerID="542e9b1121ae1cbf5215c3adbb679ace9bb80038c1a2913d47e38ed9f34fe1ba" exitCode=0 Feb 14 13:55:59 crc kubenswrapper[4750]: I0214 13:55:59.127794 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerDied","Data":"542e9b1121ae1cbf5215c3adbb679ace9bb80038c1a2913d47e38ed9f34fe1ba"} Feb 14 
13:55:59 crc kubenswrapper[4750]: I0214 13:55:59.180406 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:55:59 crc kubenswrapper[4750]: I0214 13:55:59.769031 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96865"] Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.047285 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.047366 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.096181 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.128977 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.129047 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.129096 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.129823 4750 kuberuntime_manager.go:1027] "Message for Container 
of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.129949 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b" gracePeriod=600 Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.137420 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerStarted","Data":"4eb2d91110a213e73d7657b1b5e2ff5b8e7dfdced2b9adfebf33f59ce2732bc9"} Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.137965 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-96865" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="registry-server" containerID="cri-o://32e6f022183bc1c92903b261085c8d136b5e31ccc45e608902ce4bba01da1081" gracePeriod=2 Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.159325 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hbtg9" podStartSLOduration=2.999305623 podStartE2EDuration="1m20.15930442s" podCreationTimestamp="2026-02-14 13:54:40 +0000 UTC" firstStartedPulling="2026-02-14 13:54:42.356581412 +0000 UTC m=+154.382570893" lastFinishedPulling="2026-02-14 13:55:59.516580209 +0000 UTC m=+231.542569690" observedRunningTime="2026-02-14 13:56:00.152779172 +0000 UTC m=+232.178768653" watchObservedRunningTime="2026-02-14 13:56:00.15930442 
+0000 UTC m=+232.185293901" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.660985 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:56:00 crc kubenswrapper[4750]: I0214 13:56:00.661027 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:56:01 crc kubenswrapper[4750]: I0214 13:56:01.143558 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b" exitCode=0 Feb 14 13:56:01 crc kubenswrapper[4750]: I0214 13:56:01.143656 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b"} Feb 14 13:56:01 crc kubenswrapper[4750]: I0214 13:56:01.173005 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7qqsx"] Feb 14 13:56:01 crc kubenswrapper[4750]: I0214 13:56:01.173252 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7qqsx" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="registry-server" containerID="cri-o://18f64db85577343678110935ace664691c8f3cf7ff3e74338fbcee641731897f" gracePeriod=2 Feb 14 13:56:01 crc kubenswrapper[4750]: I0214 13:56:01.734945 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hbtg9" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="registry-server" probeResult="failure" output=< Feb 14 13:56:01 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 13:56:01 crc kubenswrapper[4750]: > Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 
13:56:02.152952 4750 generic.go:334] "Generic (PLEG): container finished" podID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerID="18f64db85577343678110935ace664691c8f3cf7ff3e74338fbcee641731897f" exitCode=0 Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.153024 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qqsx" event={"ID":"eda53ecd-2d72-46c1-b809-c5753c174cd9","Type":"ContainerDied","Data":"18f64db85577343678110935ace664691c8f3cf7ff3e74338fbcee641731897f"} Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.154977 4750 generic.go:334] "Generic (PLEG): container finished" podID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerID="32e6f022183bc1c92903b261085c8d136b5e31ccc45e608902ce4bba01da1081" exitCode=0 Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.155032 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96865" event={"ID":"5c5e32c6-03aa-44ee-9355-346b7b6c516c","Type":"ContainerDied","Data":"32e6f022183bc1c92903b261085c8d136b5e31ccc45e608902ce4bba01da1081"} Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.157715 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"ed0706ee349da09831b26e5d759efb9be0265de4607faf38c0a7fea0110aee8e"} Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.506237 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.652680 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gfgz\" (UniqueName: \"kubernetes.io/projected/5c5e32c6-03aa-44ee-9355-346b7b6c516c-kube-api-access-2gfgz\") pod \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.652735 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-utilities\") pod \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.652943 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-catalog-content\") pod \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\" (UID: \"5c5e32c6-03aa-44ee-9355-346b7b6c516c\") " Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.653860 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-utilities" (OuterVolumeSpecName: "utilities") pod "5c5e32c6-03aa-44ee-9355-346b7b6c516c" (UID: "5c5e32c6-03aa-44ee-9355-346b7b6c516c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.670303 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5e32c6-03aa-44ee-9355-346b7b6c516c-kube-api-access-2gfgz" (OuterVolumeSpecName: "kube-api-access-2gfgz") pod "5c5e32c6-03aa-44ee-9355-346b7b6c516c" (UID: "5c5e32c6-03aa-44ee-9355-346b7b6c516c"). InnerVolumeSpecName "kube-api-access-2gfgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.702682 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c5e32c6-03aa-44ee-9355-346b7b6c516c" (UID: "5c5e32c6-03aa-44ee-9355-346b7b6c516c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.753748 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gfgz\" (UniqueName: \"kubernetes.io/projected/5c5e32c6-03aa-44ee-9355-346b7b6c516c-kube-api-access-2gfgz\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.753775 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.753783 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5e32c6-03aa-44ee-9355-346b7b6c516c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.948927 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.958772 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-utilities\") pod \"eda53ecd-2d72-46c1-b809-c5753c174cd9\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.958815 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjsz8\" (UniqueName: \"kubernetes.io/projected/eda53ecd-2d72-46c1-b809-c5753c174cd9-kube-api-access-cjsz8\") pod \"eda53ecd-2d72-46c1-b809-c5753c174cd9\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.958836 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-catalog-content\") pod \"eda53ecd-2d72-46c1-b809-c5753c174cd9\" (UID: \"eda53ecd-2d72-46c1-b809-c5753c174cd9\") " Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.965277 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda53ecd-2d72-46c1-b809-c5753c174cd9-kube-api-access-cjsz8" (OuterVolumeSpecName: "kube-api-access-cjsz8") pod "eda53ecd-2d72-46c1-b809-c5753c174cd9" (UID: "eda53ecd-2d72-46c1-b809-c5753c174cd9"). InnerVolumeSpecName "kube-api-access-cjsz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:56:02 crc kubenswrapper[4750]: I0214 13:56:02.975990 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-utilities" (OuterVolumeSpecName: "utilities") pod "eda53ecd-2d72-46c1-b809-c5753c174cd9" (UID: "eda53ecd-2d72-46c1-b809-c5753c174cd9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.015225 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda53ecd-2d72-46c1-b809-c5753c174cd9" (UID: "eda53ecd-2d72-46c1-b809-c5753c174cd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.060096 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.060165 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjsz8\" (UniqueName: \"kubernetes.io/projected/eda53ecd-2d72-46c1-b809-c5753c174cd9-kube-api-access-cjsz8\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.060179 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda53ecd-2d72-46c1-b809-c5753c174cd9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.171768 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qqsx" event={"ID":"eda53ecd-2d72-46c1-b809-c5753c174cd9","Type":"ContainerDied","Data":"ff5f2ae3f5c9ad0c688ef646ad0b87da8a652db40045563894c5ffc02deefb11"} Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.171844 4750 scope.go:117] "RemoveContainer" containerID="18f64db85577343678110935ace664691c8f3cf7ff3e74338fbcee641731897f" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.172001 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qqsx" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.178834 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96865" event={"ID":"5c5e32c6-03aa-44ee-9355-346b7b6c516c","Type":"ContainerDied","Data":"9e9e05b121615e6e118e23aa24d7ae3f8162e0ba2a9cd7a2905ed319d8b495ce"} Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.178861 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96865" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.196569 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96865"] Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.199861 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-96865"] Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.200331 4750 scope.go:117] "RemoveContainer" containerID="6bd3cb6ba58d941863b378fb29831d5d944607a16c296555919aa903299b1723" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.215192 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7qqsx"] Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.217846 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7qqsx"] Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.222419 4750 scope.go:117] "RemoveContainer" containerID="b3823b2170f61a2e9d3ee12a1646952d0c016413d145c8b00452cc13ec49d70f" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.236421 4750 scope.go:117] "RemoveContainer" containerID="32e6f022183bc1c92903b261085c8d136b5e31ccc45e608902ce4bba01da1081" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.247987 4750 scope.go:117] "RemoveContainer" 
containerID="d065b3cb66e75750cbd13dd16ee5799edbe3b9e8dd85861b2880ee831346ba5c" Feb 14 13:56:03 crc kubenswrapper[4750]: I0214 13:56:03.269562 4750 scope.go:117] "RemoveContainer" containerID="66d3c9648ad4841ba6c2d6462ad5492cb9704a42a7b5f2f50313df3727c42bcf" Feb 14 13:56:04 crc kubenswrapper[4750]: I0214 13:56:04.750062 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" path="/var/lib/kubelet/pods/5c5e32c6-03aa-44ee-9355-346b7b6c516c/volumes" Feb 14 13:56:04 crc kubenswrapper[4750]: I0214 13:56:04.751098 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" path="/var/lib/kubelet/pods/eda53ecd-2d72-46c1-b809-c5753c174cd9/volumes" Feb 14 13:56:10 crc kubenswrapper[4750]: I0214 13:56:10.106427 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:56:10 crc kubenswrapper[4750]: I0214 13:56:10.709525 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:56:10 crc kubenswrapper[4750]: I0214 13:56:10.774898 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:56:11 crc kubenswrapper[4750]: I0214 13:56:11.083237 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f84rf"] Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.795865 4750 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796788 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796808 4750 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796824 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="extract-utilities" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796834 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="extract-utilities" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796845 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796879 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796896 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796903 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796916 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="extract-content" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796924 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="extract-content" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796967 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="extract-content" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796975 4750 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="extract-content" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.796987 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="extract-utilities" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.796995 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="extract-utilities" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.797006 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="extract-utilities" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797037 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="extract-utilities" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.797050 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="extract-content" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797057 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="extract-content" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.797072 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd859b0d-57f2-4c9a-82a2-8ea16d069f3d" containerName="pruner" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797079 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd859b0d-57f2-4c9a-82a2-8ea16d069f3d" containerName="pruner" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797336 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5e32c6-03aa-44ee-9355-346b7b6c516c" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797360 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d67ac662-6ad8-42d6-be21-e600814a6074" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797402 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd859b0d-57f2-4c9a-82a2-8ea16d069f3d" containerName="pruner" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797413 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda53ecd-2d72-46c1-b809-c5753c174cd9" containerName="registry-server" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.797897 4750 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798244 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b" gracePeriod=15 Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798365 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701" gracePeriod=15 Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798435 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a" gracePeriod=15 Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798409 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08" gracePeriod=15 Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798320 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798423 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e" gracePeriod=15 Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.798855 4750 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799064 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799075 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799085 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799090 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799104 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 
13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799132 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799138 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799143 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799154 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799159 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799167 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799172 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 13:56:12 crc kubenswrapper[4750]: E0214 13:56:12.799182 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799188 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799279 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799290 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799301 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799310 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799317 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.799325 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.803134 4750 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.895926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.895990 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.896059 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.896097 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.896149 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.896171 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.896241 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.896324 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997282 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997338 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997360 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997386 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997422 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997432 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997461 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997488 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997511 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997539 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997584 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997593 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997628 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997462 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997682 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:12 crc kubenswrapper[4750]: I0214 13:56:12.997717 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.235055 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.236540 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.237256 4750 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08" exitCode=0 Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.237299 4750 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701" exitCode=0 Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.237316 4750 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a" exitCode=0 Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.237327 4750 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e" exitCode=2 Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.237397 4750 scope.go:117] "RemoveContainer" containerID="f976e29961f218d2bd56a8bfb0e616247c9a76fa0f79b6d80320938a8ebeda6c" Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.239811 4750 generic.go:334] "Generic (PLEG): container finished" podID="38fcd686-2510-46e2-9241-bfc27bfe23cc" containerID="795a47c241ad27c87f4ac6f24df7a4ea8c1fa12b108bcfd1edcc944aa066cdce" exitCode=0 Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.239848 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"38fcd686-2510-46e2-9241-bfc27bfe23cc","Type":"ContainerDied","Data":"795a47c241ad27c87f4ac6f24df7a4ea8c1fa12b108bcfd1edcc944aa066cdce"} Feb 14 13:56:13 crc kubenswrapper[4750]: I0214 13:56:13.240620 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.251923 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.655866 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.656701 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721361 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-kubelet-dir\") pod \"38fcd686-2510-46e2-9241-bfc27bfe23cc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721443 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-var-lock\") pod \"38fcd686-2510-46e2-9241-bfc27bfe23cc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721467 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38fcd686-2510-46e2-9241-bfc27bfe23cc" (UID: "38fcd686-2510-46e2-9241-bfc27bfe23cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721490 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "38fcd686-2510-46e2-9241-bfc27bfe23cc" (UID: "38fcd686-2510-46e2-9241-bfc27bfe23cc"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721503 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38fcd686-2510-46e2-9241-bfc27bfe23cc-kube-api-access\") pod \"38fcd686-2510-46e2-9241-bfc27bfe23cc\" (UID: \"38fcd686-2510-46e2-9241-bfc27bfe23cc\") " Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721732 4750 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.721754 4750 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38fcd686-2510-46e2-9241-bfc27bfe23cc-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.730085 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fcd686-2510-46e2-9241-bfc27bfe23cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38fcd686-2510-46e2-9241-bfc27bfe23cc" (UID: "38fcd686-2510-46e2-9241-bfc27bfe23cc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:56:14 crc kubenswrapper[4750]: I0214 13:56:14.823701 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38fcd686-2510-46e2-9241-bfc27bfe23cc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:15 crc kubenswrapper[4750]: I0214 13:56:15.261303 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"38fcd686-2510-46e2-9241-bfc27bfe23cc","Type":"ContainerDied","Data":"8e1a4d897f6d1853752376e178dff310a721202985f22f8c2968fc90dab75258"} Feb 14 13:56:15 crc kubenswrapper[4750]: I0214 13:56:15.261714 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1a4d897f6d1853752376e178dff310a721202985f22f8c2968fc90dab75258" Feb 14 13:56:15 crc kubenswrapper[4750]: I0214 13:56:15.261409 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 13:56:15 crc kubenswrapper[4750]: I0214 13:56:15.268054 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.271200 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.272181 4750 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b" exitCode=0 Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.272226 4750 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72717fa1201d3afcac38ba6c75dd818fa829baaf41a30134a3ea1cbd790d2afd" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.287921 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.288809 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.289763 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.290155 4750 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.456809 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.456916 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.456983 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.456970 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.457007 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.457048 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.457557 4750 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.457596 4750 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.457615 4750 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:16 crc kubenswrapper[4750]: I0214 13:56:16.755539 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 14 13:56:17 crc kubenswrapper[4750]: I0214 13:56:17.280249 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:17 crc kubenswrapper[4750]: I0214 13:56:17.283955 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:17 crc kubenswrapper[4750]: I0214 13:56:17.284529 4750 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:17 crc kubenswrapper[4750]: I0214 13:56:17.285399 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:17 crc kubenswrapper[4750]: I0214 13:56:17.285897 4750 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:17 crc kubenswrapper[4750]: E0214 13:56:17.857812 4750 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:17 crc kubenswrapper[4750]: I0214 
13:56:17.858493 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:17 crc kubenswrapper[4750]: W0214 13:56:17.904418 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8a6b5e74547e127892eebbbdf32404e6d3ed809dfdc941ecd23fb9190b5cd0bb WatchSource:0}: Error finding container 8a6b5e74547e127892eebbbdf32404e6d3ed809dfdc941ecd23fb9190b5cd0bb: Status 404 returned error can't find the container with id 8a6b5e74547e127892eebbbdf32404e6d3ed809dfdc941ecd23fb9190b5cd0bb Feb 14 13:56:17 crc kubenswrapper[4750]: E0214 13:56:17.909545 4750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894217ddbfb6d3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 13:56:17.908763962 +0000 UTC m=+249.934753433,LastTimestamp:2026-02-14 13:56:17.908763962 +0000 UTC m=+249.934753433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 13:56:18 crc kubenswrapper[4750]: I0214 13:56:18.286932 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8"} Feb 14 13:56:18 crc kubenswrapper[4750]: I0214 13:56:18.287399 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8a6b5e74547e127892eebbbdf32404e6d3ed809dfdc941ecd23fb9190b5cd0bb"} Feb 14 13:56:18 crc kubenswrapper[4750]: I0214 13:56:18.289282 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:18 crc kubenswrapper[4750]: E0214 13:56:18.289309 4750 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:56:18 crc kubenswrapper[4750]: I0214 13:56:18.289677 4750 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:18 crc kubenswrapper[4750]: I0214 13:56:18.746744 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:18 crc kubenswrapper[4750]: I0214 13:56:18.747877 4750 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.080357 4750 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.082705 4750 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.083430 4750 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.084082 4750 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.084841 4750 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: 
connection refused" Feb 14 13:56:20 crc kubenswrapper[4750]: I0214 13:56:20.085068 4750 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.085664 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.287159 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.692988 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Feb 14 13:56:20 crc kubenswrapper[4750]: E0214 13:56:20.827406 4750 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" volumeName="registry-storage" Feb 14 13:56:21 crc kubenswrapper[4750]: E0214 13:56:21.494164 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Feb 14 13:56:23 crc kubenswrapper[4750]: E0214 13:56:23.057874 4750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894217ddbfb6d3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 13:56:17.908763962 +0000 UTC m=+249.934753433,LastTimestamp:2026-02-14 13:56:17.908763962 +0000 UTC m=+249.934753433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 13:56:23 crc kubenswrapper[4750]: E0214 13:56:23.095939 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Feb 14 13:56:25 crc kubenswrapper[4750]: I0214 13:56:25.338011 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 13:56:25 crc kubenswrapper[4750]: I0214 13:56:25.338098 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="f614b9022728cf315e60c057852e563e" containerID="cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1" exitCode=1 Feb 14 13:56:25 crc kubenswrapper[4750]: I0214 13:56:25.338212 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1"} Feb 14 13:56:25 crc kubenswrapper[4750]: I0214 13:56:25.338987 4750 scope.go:117] "RemoveContainer" containerID="cbaa5e1698251c1f93b6000dd32e243d9226ca2e32e7f5965d881230c1e74cf1" Feb 14 13:56:25 crc kubenswrapper[4750]: I0214 13:56:25.340268 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:25 crc kubenswrapper[4750]: I0214 13:56:25.340823 4750 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:26 crc kubenswrapper[4750]: E0214 13:56:26.297369 4750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="6.4s" Feb 14 13:56:26 crc kubenswrapper[4750]: I0214 13:56:26.352898 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 13:56:26 crc kubenswrapper[4750]: I0214 13:56:26.352996 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"017a54a02e9fc332363fea16815ed60d5de384e4742c1b9bd324ce7877ce8e01"} Feb 14 13:56:26 crc kubenswrapper[4750]: I0214 13:56:26.354278 4750 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:26 crc kubenswrapper[4750]: I0214 13:56:26.354889 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:27 crc kubenswrapper[4750]: I0214 13:56:27.741625 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:27 crc kubenswrapper[4750]: I0214 13:56:27.743607 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:27 crc kubenswrapper[4750]: I0214 13:56:27.743947 4750 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:27 crc kubenswrapper[4750]: I0214 13:56:27.755378 4750 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:27 crc kubenswrapper[4750]: I0214 13:56:27.755420 4750 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:27 crc kubenswrapper[4750]: E0214 13:56:27.755840 4750 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:27 crc kubenswrapper[4750]: I0214 13:56:27.756298 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.370636 4750 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bd912cdade9666e8ad942cd1caa771116f9ac53de09e2309bd5c79dc0c6c2217" exitCode=0 Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.370862 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bd912cdade9666e8ad942cd1caa771116f9ac53de09e2309bd5c79dc0c6c2217"} Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.370889 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b92b2c13fa020002487d0361c119511a7002e111be83ee49eaaebb8914bfe580"} Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.371135 4750 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.371146 4750 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:28 crc kubenswrapper[4750]: E0214 13:56:28.371451 4750 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.372228 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.372970 4750 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.748302 4750 status_manager.go:851] "Failed to get status for pod" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.748628 4750 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.749074 4750 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 14 13:56:28 crc kubenswrapper[4750]: I0214 13:56:28.848713 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:56:29 crc 
kubenswrapper[4750]: I0214 13:56:29.384400 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d1b07a8435cab14fe08f5ee2c7e70ef87c652fc96d18c5c8f548bf604438f27a"} Feb 14 13:56:29 crc kubenswrapper[4750]: I0214 13:56:29.384702 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b65eca73986fefa93b7358b621342f53caf00cf5181d2ded29a97adc0db1dbaf"} Feb 14 13:56:29 crc kubenswrapper[4750]: I0214 13:56:29.384718 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd76a2278c6d0451f06eab2af949b34f48a627a9bb4e903e2554830054bb6b12"} Feb 14 13:56:30 crc kubenswrapper[4750]: I0214 13:56:30.402623 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8e580f95cc0929e9e6a9b32598113ba75cc894ed8d515fdee674b321e32c0f3a"} Feb 14 13:56:30 crc kubenswrapper[4750]: I0214 13:56:30.402692 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c7c991fdbfa2e9518841945482c2cf8cb043bd0a34c0ad6f3e72365e915b43fe"} Feb 14 13:56:30 crc kubenswrapper[4750]: I0214 13:56:30.403286 4750 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:30 crc kubenswrapper[4750]: I0214 13:56:30.403318 4750 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:30 crc 
kubenswrapper[4750]: I0214 13:56:30.404221 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:32 crc kubenswrapper[4750]: I0214 13:56:32.714950 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:56:32 crc kubenswrapper[4750]: I0214 13:56:32.723596 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:56:32 crc kubenswrapper[4750]: I0214 13:56:32.756442 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:32 crc kubenswrapper[4750]: I0214 13:56:32.756500 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:32 crc kubenswrapper[4750]: I0214 13:56:32.765252 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:35 crc kubenswrapper[4750]: I0214 13:56:35.517403 4750 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:35 crc kubenswrapper[4750]: I0214 13:56:35.549528 4750 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:56:28Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T13:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd76a2278c6d0451f06eab2af949b34f48a627a9bb4e903e2554830054bb6b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d1b07a8435cab14fe08f5ee2c7e70ef87c652fc96d18c5c8f548bf604438f27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b65eca73986fefa93b7358b621342f53caf00cf5181d2ded29a97adc0db1dbaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e580f95cc0929e9e6a9b32598113ba75cc894ed8d515fdee674b321e32c0f3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7c991fdbfa2e9518841945482c2cf8cb043bd0a34c0ad6f3e72365e915b43fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T13:56:29Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd912cdade9666e8ad942cd1caa771116f9ac53de09e2309bd5c79dc0c6c2217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd912cdade9666e8ad942cd1caa771116f9ac53de09e2309bd5c79dc0c6c2217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T13:56:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T13:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for 
pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"d5b091f8-0f4b-40b6-acf8-14b3ccdef465\": field is immutable" Feb 14 13:56:35 crc kubenswrapper[4750]: I0214 13:56:35.619530 4750 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b6b12222-8d3e-428d-81ee-27c67f7bb0e3" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.139410 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" podUID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" containerName="oauth-openshift" containerID="cri-o://9d0539598a001c63699dff622a687f7cbb20a5f152389c4e3b1f243702a19c3e" gracePeriod=15 Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.453963 4750 generic.go:334] "Generic (PLEG): container finished" podID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" containerID="9d0539598a001c63699dff622a687f7cbb20a5f152389c4e3b1f243702a19c3e" exitCode=0 Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.454716 4750 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.454747 4750 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.454995 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" event={"ID":"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6","Type":"ContainerDied","Data":"9d0539598a001c63699dff622a687f7cbb20a5f152389c4e3b1f243702a19c3e"} Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.461371 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.461494 4750 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b6b12222-8d3e-428d-81ee-27c67f7bb0e3" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.534772 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.640182 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-trusted-ca-bundle\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.640467 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-ocp-branding-template\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.640616 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-idp-0-file-data\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.640736 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-policies\") 
pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.640902 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-session\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641031 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641179 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-cliconfig\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641283 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641294 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-error\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641369 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-service-ca\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641395 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-provider-selection\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641435 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-login\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641465 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6t54\" (UniqueName: \"kubernetes.io/projected/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-kube-api-access-r6t54\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 
13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641487 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-serving-cert\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641503 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-dir\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641523 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-router-certs\") pod \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\" (UID: \"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6\") " Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641680 4750 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641692 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641707 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod 
"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.641753 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.642120 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.646531 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.646575 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-kube-api-access-r6t54" (OuterVolumeSpecName: "kube-api-access-r6t54") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). 
InnerVolumeSpecName "kube-api-access-r6t54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.646740 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.647302 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.647433 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.647904 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.652520 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.652845 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.652843 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" (UID: "d764ae6c-05b7-46c7-be61-9b7dfbcd63b6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.742981 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743035 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743713 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743766 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743788 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743811 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743833 4750 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743853 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743873 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6t54\" (UniqueName: \"kubernetes.io/projected/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-kube-api-access-r6t54\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743892 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743913 4750 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:36 crc kubenswrapper[4750]: I0214 13:56:36.743936 4750 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 13:56:37 crc kubenswrapper[4750]: I0214 13:56:37.469029 4750 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:37 crc kubenswrapper[4750]: I0214 13:56:37.469084 4750 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="d5b091f8-0f4b-40b6-acf8-14b3ccdef465" Feb 14 13:56:37 crc kubenswrapper[4750]: I0214 13:56:37.470525 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" Feb 14 13:56:37 crc kubenswrapper[4750]: I0214 13:56:37.471925 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f84rf" event={"ID":"d764ae6c-05b7-46c7-be61-9b7dfbcd63b6","Type":"ContainerDied","Data":"019cca46ae840f14274e5325f416cdc4fe46b531a60308bd7338f491f6c0529e"} Feb 14 13:56:37 crc kubenswrapper[4750]: I0214 13:56:37.472043 4750 scope.go:117] "RemoveContainer" containerID="9d0539598a001c63699dff622a687f7cbb20a5f152389c4e3b1f243702a19c3e" Feb 14 13:56:37 crc kubenswrapper[4750]: I0214 13:56:37.479085 4750 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b6b12222-8d3e-428d-81ee-27c67f7bb0e3" Feb 14 13:56:38 crc kubenswrapper[4750]: I0214 13:56:38.851711 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 13:56:45 crc kubenswrapper[4750]: I0214 13:56:45.025040 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 14 13:56:45 crc kubenswrapper[4750]: I0214 13:56:45.068002 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 13:56:45 crc kubenswrapper[4750]: I0214 13:56:45.687837 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 13:56:45 crc kubenswrapper[4750]: I0214 13:56:45.716811 4750 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 13:56:46 crc kubenswrapper[4750]: I0214 13:56:46.438841 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 13:56:46 crc kubenswrapper[4750]: I0214 13:56:46.465994 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 13:56:46 crc kubenswrapper[4750]: I0214 13:56:46.499782 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 13:56:46 crc kubenswrapper[4750]: I0214 13:56:46.710462 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 13:56:46 crc kubenswrapper[4750]: I0214 13:56:46.935423 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 14 13:56:46 crc kubenswrapper[4750]: I0214 13:56:46.991775 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 13:56:47 crc kubenswrapper[4750]: I0214 13:56:47.074898 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 13:56:47 crc kubenswrapper[4750]: I0214 13:56:47.092749 4750 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 13:56:47 crc kubenswrapper[4750]: I0214 13:56:47.361416 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 13:56:47 crc kubenswrapper[4750]: I0214 13:56:47.405275 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 14 13:56:47 crc kubenswrapper[4750]: I0214 13:56:47.513553 4750 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 13:56:47 crc kubenswrapper[4750]: I0214 13:56:47.932967 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.151885 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.460682 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.508399 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.634750 4750 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.639540 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.641975 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.671416 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.707539 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.766316 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.767700 
4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.826223 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.948338 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 13:56:48 crc kubenswrapper[4750]: I0214 13:56:48.987306 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.025570 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.180453 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.228766 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.312958 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.535514 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.565867 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.586708 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.592784 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.604192 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.663039 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.759786 4750 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.779089 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.795139 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.818438 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.831066 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.933195 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 13:56:49.972827 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 14 13:56:49 crc kubenswrapper[4750]: I0214 
13:56:49.990678 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.091801 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.092550 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.094244 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.100627 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.101628 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.169199 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.171392 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.279281 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.293996 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.304651 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.326737 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.437998 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.486895 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.492557 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.572530 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.606298 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.746151 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.819443 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.831390 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 14 13:56:50 crc kubenswrapper[4750]: I0214 13:56:50.991818 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.040675 4750 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.051907 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.103396 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.182414 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.324182 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.325046 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.345818 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.348697 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.452475 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.511037 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.589253 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.618330 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.628977 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.731070 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.762619 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.821540 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 14 13:56:51 crc kubenswrapper[4750]: I0214 13:56:51.880921 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.035916 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.036893 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.063723 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.074929 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 13:56:52 crc kubenswrapper[4750]: 
I0214 13:56:52.285077 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.302891 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.344088 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.460368 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.480407 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.534430 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.560981 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.595853 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.656897 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.744141 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.796217 4750 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.828513 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.898030 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 14 13:56:52 crc kubenswrapper[4750]: I0214 13:56:52.987846 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.013888 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.037031 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.059406 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.137771 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.334830 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.335665 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.353177 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.439559 4750 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.457070 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.486827 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.534001 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.535223 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.885225 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.923461 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 13:56:53 crc kubenswrapper[4750]: I0214 13:56:53.974765 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.042424 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.129643 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.135744 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.305433 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.344988 4750 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.382456 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.387527 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.405422 4750 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.407602 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.413196 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-f84rf"] Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.413304 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.422434 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.424499 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.449854 4750 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.44982676 podStartE2EDuration="19.44982676s" podCreationTimestamp="2026-02-14 13:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:56:54.438582451 +0000 UTC m=+286.464571942" watchObservedRunningTime="2026-02-14 13:56:54.44982676 +0000 UTC m=+286.475816281" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.465557 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.514078 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.555214 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.628622 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.659359 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.716335 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.754576 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" path="/var/lib/kubelet/pods/d764ae6c-05b7-46c7-be61-9b7dfbcd63b6/volumes" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.781055 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 14 13:56:54 crc 
kubenswrapper[4750]: I0214 13:56:54.853048 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 13:56:54 crc kubenswrapper[4750]: I0214 13:56:54.869061 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.022549 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.025486 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.038604 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.086078 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.153643 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.178970 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.185923 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.252400 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.268394 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.348898 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.410307 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.418250 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.495344 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.516010 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.582011 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.590440 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.633177 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.636572 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.650426 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.761360 4750 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.871947 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 14 13:56:55 crc kubenswrapper[4750]: I0214 13:56:55.939409 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.127778 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.175565 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.198979 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.209691 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.333219 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.380822 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.451614 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.586384 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 14 13:56:56 
crc kubenswrapper[4750]: I0214 13:56:56.718531 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.744342 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.765195 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.821022 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.822609 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.833795 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.853985 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.907958 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.977061 4750 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 13:56:56 crc kubenswrapper[4750]: I0214 13:56:56.977418 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8" gracePeriod=5 Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.074868 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.090299 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.135508 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.311189 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.403156 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.406382 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.412016 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.486291 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.513271 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.650286 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 
13:56:57.709852 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.797322 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.908965 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.933616 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 14 13:56:57 crc kubenswrapper[4750]: I0214 13:56:57.985639 4750 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.000777 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.025316 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.084929 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.088988 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.150380 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.209309 4750 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.342033 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.410318 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.480152 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.514560 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.548614 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.551988 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.615994 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.737924 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.766693 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.858559 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 14 13:56:58 crc kubenswrapper[4750]: I0214 13:56:58.889549 4750 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.058312 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.215134 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.284744 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.305091 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7489ccbc46-n8frn"] Feb 14 13:56:59 crc kubenswrapper[4750]: E0214 13:56:59.305657 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" containerName="oauth-openshift" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.305685 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" containerName="oauth-openshift" Feb 14 13:56:59 crc kubenswrapper[4750]: E0214 13:56:59.305721 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" containerName="installer" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.305734 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" containerName="installer" Feb 14 13:56:59 crc kubenswrapper[4750]: E0214 13:56:59.305754 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.305766 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 14 
13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.306054 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.306091 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="d764ae6c-05b7-46c7-be61-9b7dfbcd63b6" containerName="oauth-openshift" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.306147 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fcd686-2510-46e2-9241-bfc27bfe23cc" containerName="installer" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.312841 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.315650 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.318341 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.318495 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.319131 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.319263 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.321034 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.321287 4750 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.321360 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.321439 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.321520 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.321963 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.322004 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.335945 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.339595 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7489ccbc46-n8frn"] Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.341495 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.342355 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.344323 4750 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.509995 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9372a54-a373-496d-a5ed-ec1cd2bd731e-audit-dir\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510308 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510443 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510544 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9b2\" (UniqueName: \"kubernetes.io/projected/c9372a54-a373-496d-a5ed-ec1cd2bd731e-kube-api-access-4q9b2\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510645 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-audit-policies\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510755 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-login\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510865 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.510979 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-session\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.511082 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.511201 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.511320 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-error\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.511435 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.511559 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.511664 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.512071 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.532627 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612589 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612666 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612700 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612750 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9372a54-a373-496d-a5ed-ec1cd2bd731e-audit-dir\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612771 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612807 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612842 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q9b2\" (UniqueName: \"kubernetes.io/projected/c9372a54-a373-496d-a5ed-ec1cd2bd731e-kube-api-access-4q9b2\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: 
\"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612879 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-audit-policies\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612901 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-login\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612917 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9372a54-a373-496d-a5ed-ec1cd2bd731e-audit-dir\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.612941 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.613051 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-session\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.613096 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.613174 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.613255 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-error\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.613505 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " 
pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.614364 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.614618 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-audit-policies\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.615249 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.620209 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.620682 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.620906 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.620933 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.628585 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-session\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.629846 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " 
pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.632205 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-error\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.638329 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q9b2\" (UniqueName: \"kubernetes.io/projected/c9372a54-a373-496d-a5ed-ec1cd2bd731e-kube-api-access-4q9b2\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.640670 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9372a54-a373-496d-a5ed-ec1cd2bd731e-v4-0-config-user-template-login\") pod \"oauth-openshift-7489ccbc46-n8frn\" (UID: \"c9372a54-a373-496d-a5ed-ec1cd2bd731e\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.686173 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.861381 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.933170 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:56:59 crc kubenswrapper[4750]: I0214 13:56:59.983128 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.124587 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.147752 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7489ccbc46-n8frn"] Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.167755 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.272742 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.400100 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.541172 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.586347 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.637827 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" event={"ID":"c9372a54-a373-496d-a5ed-ec1cd2bd731e","Type":"ContainerStarted","Data":"bd7ab093e4713d79fad50c25ea62656234d12636a2ca9f9ee9ca70eb5c1af226"} Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.637945 
4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" event={"ID":"c9372a54-a373-496d-a5ed-ec1cd2bd731e","Type":"ContainerStarted","Data":"2c837d314ef133b91836c334e8f1521d1c9d2b675cbbacd1f2f395be4e856e80"} Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.638330 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.669502 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" podStartSLOduration=49.66948418 podStartE2EDuration="49.66948418s" podCreationTimestamp="2026-02-14 13:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:57:00.661406837 +0000 UTC m=+292.687396318" watchObservedRunningTime="2026-02-14 13:57:00.66948418 +0000 UTC m=+292.695473661" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.751943 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.759236 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.973862 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 13:57:00 crc kubenswrapper[4750]: I0214 13:57:00.996470 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7489ccbc46-n8frn" Feb 14 13:57:01 crc kubenswrapper[4750]: I0214 13:57:01.154938 4750 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 13:57:01 crc kubenswrapper[4750]: I0214 13:57:01.273522 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 13:57:01 crc kubenswrapper[4750]: I0214 13:57:01.348650 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 13:57:01 crc kubenswrapper[4750]: I0214 13:57:01.455619 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 13:57:01 crc kubenswrapper[4750]: I0214 13:57:01.495697 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.237478 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.263831 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.580060 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.580197 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.655442 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.655512 4750 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8" exitCode=137 Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.655610 4750 scope.go:117] "RemoveContainer" containerID="d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.655613 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.675561 4750 scope.go:117] "RemoveContainer" containerID="d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8" Feb 14 13:57:02 crc kubenswrapper[4750]: E0214 13:57:02.676147 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8\": container with ID starting with d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8 not found: ID does not exist" containerID="d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.676212 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8"} err="failed to get container status \"d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8\": rpc error: code = NotFound desc = could 
not find container \"d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8\": container with ID starting with d6f5575ffd5b7a82e10e06117aaedbf10db21b0d2b3042c920f79db869ae38b8 not found: ID does not exist" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757590 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757656 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757762 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757787 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757824 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757817 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757865 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757891 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.757886 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.758224 4750 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.758246 4750 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.758256 4750 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.758267 4750 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.768784 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 13:57:02 crc kubenswrapper[4750]: I0214 13:57:02.859846 4750 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:04 crc kubenswrapper[4750]: I0214 13:57:04.747675 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 14 13:57:08 crc kubenswrapper[4750]: I0214 13:57:08.498639 4750 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 14 13:57:31 crc kubenswrapper[4750]: I0214 13:57:31.522356 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 14 13:57:34 crc kubenswrapper[4750]: I0214 13:57:34.019043 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 13:57:35 crc kubenswrapper[4750]: I0214 13:57:35.235559 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 14 13:57:36 crc kubenswrapper[4750]: I0214 13:57:36.565669 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 13:57:37 crc kubenswrapper[4750]: I0214 13:57:37.401018 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 13:57:40 crc kubenswrapper[4750]: I0214 13:57:40.440563 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 13:57:41 crc kubenswrapper[4750]: I0214 13:57:41.993079 4750 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-xk92k"] Feb 14 13:57:41 crc kubenswrapper[4750]: I0214 13:57:41.994886 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xk92k" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="registry-server" containerID="cri-o://4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589" gracePeriod=2 Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.410744 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.581208 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqhjk\" (UniqueName: \"kubernetes.io/projected/3f741224-7534-4c4b-be4c-1e91015d0a37-kube-api-access-gqhjk\") pod \"3f741224-7534-4c4b-be4c-1e91015d0a37\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.581322 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-utilities\") pod \"3f741224-7534-4c4b-be4c-1e91015d0a37\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.581400 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-catalog-content\") pod \"3f741224-7534-4c4b-be4c-1e91015d0a37\" (UID: \"3f741224-7534-4c4b-be4c-1e91015d0a37\") " Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.582913 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-utilities" (OuterVolumeSpecName: "utilities") pod "3f741224-7534-4c4b-be4c-1e91015d0a37" 
(UID: "3f741224-7534-4c4b-be4c-1e91015d0a37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.588007 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f741224-7534-4c4b-be4c-1e91015d0a37-kube-api-access-gqhjk" (OuterVolumeSpecName: "kube-api-access-gqhjk") pod "3f741224-7534-4c4b-be4c-1e91015d0a37" (UID: "3f741224-7534-4c4b-be4c-1e91015d0a37"). InnerVolumeSpecName "kube-api-access-gqhjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.613331 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f741224-7534-4c4b-be4c-1e91015d0a37" (UID: "3f741224-7534-4c4b-be4c-1e91015d0a37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.683762 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqhjk\" (UniqueName: \"kubernetes.io/projected/3f741224-7534-4c4b-be4c-1e91015d0a37-kube-api-access-gqhjk\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.684087 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.684282 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f741224-7534-4c4b-be4c-1e91015d0a37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.924661 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerID="4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589" exitCode=0 Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.924715 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk92k" event={"ID":"3f741224-7534-4c4b-be4c-1e91015d0a37","Type":"ContainerDied","Data":"4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589"} Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.924748 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk92k" event={"ID":"3f741224-7534-4c4b-be4c-1e91015d0a37","Type":"ContainerDied","Data":"021c2165dc884d0a9e31a73f2e5c58c149c17b4d2687e642ab196e28cf49565c"} Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.924771 4750 scope.go:117] "RemoveContainer" containerID="4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.924765 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk92k" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.950829 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk92k"] Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.951198 4750 scope.go:117] "RemoveContainer" containerID="b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f" Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.962096 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk92k"] Feb 14 13:57:42 crc kubenswrapper[4750]: I0214 13:57:42.976777 4750 scope.go:117] "RemoveContainer" containerID="90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec" Feb 14 13:57:43 crc kubenswrapper[4750]: I0214 13:57:43.011951 4750 scope.go:117] "RemoveContainer" containerID="4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589" Feb 14 13:57:43 crc kubenswrapper[4750]: E0214 13:57:43.014762 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589\": container with ID starting with 4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589 not found: ID does not exist" containerID="4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589" Feb 14 13:57:43 crc kubenswrapper[4750]: I0214 13:57:43.014820 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589"} err="failed to get container status \"4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589\": rpc error: code = NotFound desc = could not find container \"4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589\": container with ID starting with 4c6147ffaef6497ef38107d7ac4bf8974891643d9a0f3466ff0db0600cee7589 not found: 
ID does not exist" Feb 14 13:57:43 crc kubenswrapper[4750]: I0214 13:57:43.014861 4750 scope.go:117] "RemoveContainer" containerID="b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f" Feb 14 13:57:43 crc kubenswrapper[4750]: E0214 13:57:43.015527 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f\": container with ID starting with b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f not found: ID does not exist" containerID="b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f" Feb 14 13:57:43 crc kubenswrapper[4750]: I0214 13:57:43.015598 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f"} err="failed to get container status \"b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f\": rpc error: code = NotFound desc = could not find container \"b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f\": container with ID starting with b56992a37009b3814d86c308212eda248d07284afbe9a9fa61314ff1b87e414f not found: ID does not exist" Feb 14 13:57:43 crc kubenswrapper[4750]: I0214 13:57:43.015651 4750 scope.go:117] "RemoveContainer" containerID="90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec" Feb 14 13:57:43 crc kubenswrapper[4750]: E0214 13:57:43.016221 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec\": container with ID starting with 90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec not found: ID does not exist" containerID="90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec" Feb 14 13:57:43 crc kubenswrapper[4750]: I0214 13:57:43.016271 4750 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec"} err="failed to get container status \"90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec\": rpc error: code = NotFound desc = could not find container \"90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec\": container with ID starting with 90bc03b9069a5d44f26b0d5f9226f595ab9a6e5e761aa2dcf136594ea6494bec not found: ID does not exist" Feb 14 13:57:44 crc kubenswrapper[4750]: I0214 13:57:44.753763 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" path="/var/lib/kubelet/pods/3f741224-7534-4c4b-be4c-1e91015d0a37/volumes" Feb 14 13:57:57 crc kubenswrapper[4750]: I0214 13:57:57.950952 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ktrwb"] Feb 14 13:57:57 crc kubenswrapper[4750]: I0214 13:57:57.951782 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerName="controller-manager" containerID="cri-o://08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d" gracePeriod=30 Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.049453 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"] Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.049676 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" podUID="81c0849a-dd49-44b8-a94d-ad1138ab0246" containerName="route-controller-manager" containerID="cri-o://bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed" gracePeriod=30 Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 
13:57:58.317527 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.406882 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-client-ca\") pod \"ed95f649-6925-4589-b1fe-90d10d2b266a\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.407894 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed95f649-6925-4589-b1fe-90d10d2b266a" (UID: "ed95f649-6925-4589-b1fe-90d10d2b266a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.408228 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-proxy-ca-bundles\") pod \"ed95f649-6925-4589-b1fe-90d10d2b266a\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.408653 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed95f649-6925-4589-b1fe-90d10d2b266a-serving-cert\") pod \"ed95f649-6925-4589-b1fe-90d10d2b266a\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.411900 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ed95f649-6925-4589-b1fe-90d10d2b266a" (UID: "ed95f649-6925-4589-b1fe-90d10d2b266a"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.414773 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed95f649-6925-4589-b1fe-90d10d2b266a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed95f649-6925-4589-b1fe-90d10d2b266a" (UID: "ed95f649-6925-4589-b1fe-90d10d2b266a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.416374 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed95f649-6925-4589-b1fe-90d10d2b266a-kube-api-access-mrcjh" (OuterVolumeSpecName: "kube-api-access-mrcjh") pod "ed95f649-6925-4589-b1fe-90d10d2b266a" (UID: "ed95f649-6925-4589-b1fe-90d10d2b266a"). InnerVolumeSpecName "kube-api-access-mrcjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.420198 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrcjh\" (UniqueName: \"kubernetes.io/projected/ed95f649-6925-4589-b1fe-90d10d2b266a-kube-api-access-mrcjh\") pod \"ed95f649-6925-4589-b1fe-90d10d2b266a\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.420258 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-config\") pod \"ed95f649-6925-4589-b1fe-90d10d2b266a\" (UID: \"ed95f649-6925-4589-b1fe-90d10d2b266a\") " Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.420775 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrcjh\" (UniqueName: \"kubernetes.io/projected/ed95f649-6925-4589-b1fe-90d10d2b266a-kube-api-access-mrcjh\") on node \"crc\" DevicePath \"\"" Feb 14 13:57:58 crc 
kubenswrapper[4750]: I0214 13:57:58.420794 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.420804 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.420812 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed95f649-6925-4589-b1fe-90d10d2b266a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.421253 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-config" (OuterVolumeSpecName: "config") pod "ed95f649-6925-4589-b1fe-90d10d2b266a" (UID: "ed95f649-6925-4589-b1fe-90d10d2b266a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.437938 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.522053 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed95f649-6925-4589-b1fe-90d10d2b266a-config\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.623101 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-client-ca\") pod \"81c0849a-dd49-44b8-a94d-ad1138ab0246\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") "
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.623191 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-config\") pod \"81c0849a-dd49-44b8-a94d-ad1138ab0246\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") "
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.623263 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rrrq\" (UniqueName: \"kubernetes.io/projected/81c0849a-dd49-44b8-a94d-ad1138ab0246-kube-api-access-9rrrq\") pod \"81c0849a-dd49-44b8-a94d-ad1138ab0246\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") "
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.623311 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c0849a-dd49-44b8-a94d-ad1138ab0246-serving-cert\") pod \"81c0849a-dd49-44b8-a94d-ad1138ab0246\" (UID: \"81c0849a-dd49-44b8-a94d-ad1138ab0246\") "
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.623886 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-client-ca" (OuterVolumeSpecName: "client-ca") pod "81c0849a-dd49-44b8-a94d-ad1138ab0246" (UID: "81c0849a-dd49-44b8-a94d-ad1138ab0246"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.624222 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-config" (OuterVolumeSpecName: "config") pod "81c0849a-dd49-44b8-a94d-ad1138ab0246" (UID: "81c0849a-dd49-44b8-a94d-ad1138ab0246"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.628057 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c0849a-dd49-44b8-a94d-ad1138ab0246-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81c0849a-dd49-44b8-a94d-ad1138ab0246" (UID: "81c0849a-dd49-44b8-a94d-ad1138ab0246"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.628458 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c0849a-dd49-44b8-a94d-ad1138ab0246-kube-api-access-9rrrq" (OuterVolumeSpecName: "kube-api-access-9rrrq") pod "81c0849a-dd49-44b8-a94d-ad1138ab0246" (UID: "81c0849a-dd49-44b8-a94d-ad1138ab0246"). InnerVolumeSpecName "kube-api-access-9rrrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.725689 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c0849a-dd49-44b8-a94d-ad1138ab0246-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.725735 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.725747 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c0849a-dd49-44b8-a94d-ad1138ab0246-config\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:58 crc kubenswrapper[4750]: I0214 13:57:58.725760 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rrrq\" (UniqueName: \"kubernetes.io/projected/81c0849a-dd49-44b8-a94d-ad1138ab0246-kube-api-access-9rrrq\") on node \"crc\" DevicePath \"\""
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.028494 4750 generic.go:334] "Generic (PLEG): container finished" podID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerID="08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d" exitCode=0
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.028609 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.028602 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" event={"ID":"ed95f649-6925-4589-b1fe-90d10d2b266a","Type":"ContainerDied","Data":"08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d"}
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.028701 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ktrwb" event={"ID":"ed95f649-6925-4589-b1fe-90d10d2b266a","Type":"ContainerDied","Data":"de1aed8744bb745d74d66c6c0d544823bb64710b21f6c35b4af2549c892e6177"}
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.028723 4750 scope.go:117] "RemoveContainer" containerID="08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.030658 4750 generic.go:334] "Generic (PLEG): container finished" podID="81c0849a-dd49-44b8-a94d-ad1138ab0246" containerID="bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed" exitCode=0
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.030946 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" event={"ID":"81c0849a-dd49-44b8-a94d-ad1138ab0246","Type":"ContainerDied","Data":"bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed"}
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.030967 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9" event={"ID":"81c0849a-dd49-44b8-a94d-ad1138ab0246","Type":"ContainerDied","Data":"50af079416bb9e1a9d8637d3588723e9757322958cbf855c61b4a2c3a8a590cd"}
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.031022 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.055984 4750 scope.go:117] "RemoveContainer" containerID="08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d"
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.056856 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d\": container with ID starting with 08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d not found: ID does not exist" containerID="08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.056946 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d"} err="failed to get container status \"08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d\": rpc error: code = NotFound desc = could not find container \"08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d\": container with ID starting with 08db3324ad09460c814ddefe950472abb7b7b722b05f6621ab8d4a02fe154d1d not found: ID does not exist"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.057015 4750 scope.go:117] "RemoveContainer" containerID="bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.061229 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ktrwb"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.078265 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ktrwb"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.089267 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.096140 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwht9"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.100104 4750 scope.go:117] "RemoveContainer" containerID="bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed"
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.101570 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed\": container with ID starting with bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed not found: ID does not exist" containerID="bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.101644 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed"} err="failed to get container status \"bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed\": rpc error: code = NotFound desc = could not find container \"bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed\": container with ID starting with bed13f7135a35f192292570a76161e0e19aef41445d185b1739cb30e78ec00ed not found: ID does not exist"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837345 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"]
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.837595 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="registry-server"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837610 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="registry-server"
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.837624 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="extract-content"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837632 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="extract-content"
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.837644 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerName="controller-manager"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837653 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerName="controller-manager"
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.837666 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="extract-utilities"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837674 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="extract-utilities"
Feb 14 13:57:59 crc kubenswrapper[4750]: E0214 13:57:59.837690 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c0849a-dd49-44b8-a94d-ad1138ab0246" containerName="route-controller-manager"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837698 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c0849a-dd49-44b8-a94d-ad1138ab0246" containerName="route-controller-manager"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837801 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" containerName="controller-manager"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837813 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c0849a-dd49-44b8-a94d-ad1138ab0246" containerName="route-controller-manager"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.837830 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f741224-7534-4c4b-be4c-1e91015d0a37" containerName="registry-server"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.838261 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.843807 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.845598 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.845635 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.847318 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.847405 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.847694 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.847984 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.848505 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.852447 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.853057 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.853263 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.853411 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.853441 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.853676 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.859656 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.861308 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.868793 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"]
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.941920 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-config\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942055 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9668da-cfcd-4d28-8121-260f5799e964-serving-cert\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942092 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-proxy-ca-bundles\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942203 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-client-ca\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942328 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-client-ca\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942513 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hlr\" (UniqueName: \"kubernetes.io/projected/3eeb0647-10c7-4f04-9c5a-149a905897a2-kube-api-access-j6hlr\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942615 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eeb0647-10c7-4f04-9c5a-149a905897a2-serving-cert\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942638 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-config\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:57:59 crc kubenswrapper[4750]: I0214 13:57:59.942678 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kp8r\" (UniqueName: \"kubernetes.io/projected/eb9668da-cfcd-4d28-8121-260f5799e964-kube-api-access-8kp8r\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.044925 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kp8r\" (UniqueName: \"kubernetes.io/projected/eb9668da-cfcd-4d28-8121-260f5799e964-kube-api-access-8kp8r\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045014 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-config\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045051 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9668da-cfcd-4d28-8121-260f5799e964-serving-cert\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045071 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-proxy-ca-bundles\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045099 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-client-ca\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045164 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-client-ca\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045193 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6hlr\" (UniqueName: \"kubernetes.io/projected/3eeb0647-10c7-4f04-9c5a-149a905897a2-kube-api-access-j6hlr\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045230 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eeb0647-10c7-4f04-9c5a-149a905897a2-serving-cert\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.045250 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-config\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.046946 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-client-ca\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.047252 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-client-ca\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.047268 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-config\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.047476 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-config\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.047869 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-proxy-ca-bundles\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.050131 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eeb0647-10c7-4f04-9c5a-149a905897a2-serving-cert\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.064695 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9668da-cfcd-4d28-8121-260f5799e964-serving-cert\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.069153 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6hlr\" (UniqueName: \"kubernetes.io/projected/3eeb0647-10c7-4f04-9c5a-149a905897a2-kube-api-access-j6hlr\") pod \"controller-manager-f5fc4c9bf-jpt98\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.083317 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kp8r\" (UniqueName: \"kubernetes.io/projected/eb9668da-cfcd-4d28-8121-260f5799e964-kube-api-access-8kp8r\") pod \"route-controller-manager-5b4b789675-m2tmd\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") " pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.167558 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.181541 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.413348 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"]
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.461809 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"]
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.749012 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c0849a-dd49-44b8-a94d-ad1138ab0246" path="/var/lib/kubelet/pods/81c0849a-dd49-44b8-a94d-ad1138ab0246/volumes"
Feb 14 13:58:00 crc kubenswrapper[4750]: I0214 13:58:00.749912 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed95f649-6925-4589-b1fe-90d10d2b266a" path="/var/lib/kubelet/pods/ed95f649-6925-4589-b1fe-90d10d2b266a/volumes"
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.046074 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" event={"ID":"eb9668da-cfcd-4d28-8121-260f5799e964","Type":"ContainerStarted","Data":"890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330"}
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.046984 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" event={"ID":"eb9668da-cfcd-4d28-8121-260f5799e964","Type":"ContainerStarted","Data":"391373ad748790785a12860eeb475f8340f783f4ef043442482196d4428cec0a"}
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.049091 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.051002 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" event={"ID":"3eeb0647-10c7-4f04-9c5a-149a905897a2","Type":"ContainerStarted","Data":"59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221"}
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.051046 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" event={"ID":"3eeb0647-10c7-4f04-9c5a-149a905897a2","Type":"ContainerStarted","Data":"75b76e3286996a9a9be7770b17a665dc21769614e05c6ae0e9ed58fdd8387370"}
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.053309 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.070448 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" podStartSLOduration=3.070433995 podStartE2EDuration="3.070433995s" podCreationTimestamp="2026-02-14 13:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:01.067409629 +0000 UTC m=+353.093399110" watchObservedRunningTime="2026-02-14 13:58:01.070433995 +0000 UTC m=+353.096423476"
Feb 14 13:58:01 crc kubenswrapper[4750]: I0214 13:58:01.084844 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" podStartSLOduration=4.084827486 podStartE2EDuration="4.084827486s" podCreationTimestamp="2026-02-14 13:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:01.083310783 +0000 UTC m=+353.109300264" watchObservedRunningTime="2026-02-14 13:58:01.084827486 +0000 UTC m=+353.110816967"
Feb 14 13:58:02 crc kubenswrapper[4750]: I0214 13:58:02.058211 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:02 crc kubenswrapper[4750]: I0214 13:58:02.064865 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"
Feb 14 13:58:03 crc kubenswrapper[4750]: I0214 13:58:03.522820 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"]
Feb 14 13:58:03 crc kubenswrapper[4750]: I0214 13:58:03.529042 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"]
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.069319 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" podUID="eb9668da-cfcd-4d28-8121-260f5799e964" containerName="route-controller-manager" containerID="cri-o://890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330" gracePeriod=30
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.482730 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.616687 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9668da-cfcd-4d28-8121-260f5799e964-serving-cert\") pod \"eb9668da-cfcd-4d28-8121-260f5799e964\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") "
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.616746 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kp8r\" (UniqueName: \"kubernetes.io/projected/eb9668da-cfcd-4d28-8121-260f5799e964-kube-api-access-8kp8r\") pod \"eb9668da-cfcd-4d28-8121-260f5799e964\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") "
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.616772 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-client-ca\") pod \"eb9668da-cfcd-4d28-8121-260f5799e964\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") "
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.616806 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-config\") pod \"eb9668da-cfcd-4d28-8121-260f5799e964\" (UID: \"eb9668da-cfcd-4d28-8121-260f5799e964\") "
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.617707 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-config" (OuterVolumeSpecName: "config") pod "eb9668da-cfcd-4d28-8121-260f5799e964" (UID: "eb9668da-cfcd-4d28-8121-260f5799e964"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.617750 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb9668da-cfcd-4d28-8121-260f5799e964" (UID: "eb9668da-cfcd-4d28-8121-260f5799e964"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.621753 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9668da-cfcd-4d28-8121-260f5799e964-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb9668da-cfcd-4d28-8121-260f5799e964" (UID: "eb9668da-cfcd-4d28-8121-260f5799e964"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.623598 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9668da-cfcd-4d28-8121-260f5799e964-kube-api-access-8kp8r" (OuterVolumeSpecName: "kube-api-access-8kp8r") pod "eb9668da-cfcd-4d28-8121-260f5799e964" (UID: "eb9668da-cfcd-4d28-8121-260f5799e964"). InnerVolumeSpecName "kube-api-access-8kp8r".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.718274 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9668da-cfcd-4d28-8121-260f5799e964-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.718321 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kp8r\" (UniqueName: \"kubernetes.io/projected/eb9668da-cfcd-4d28-8121-260f5799e964-kube-api-access-8kp8r\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.718335 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.718347 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9668da-cfcd-4d28-8121-260f5799e964-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.843308 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd"] Feb 14 13:58:04 crc kubenswrapper[4750]: E0214 13:58:04.843619 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9668da-cfcd-4d28-8121-260f5799e964" containerName="route-controller-manager" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.843642 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9668da-cfcd-4d28-8121-260f5799e964" containerName="route-controller-manager" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.843780 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9668da-cfcd-4d28-8121-260f5799e964" containerName="route-controller-manager" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.844488 
4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:04 crc kubenswrapper[4750]: I0214 13:58:04.854029 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd"] Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.021950 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vcp\" (UniqueName: \"kubernetes.io/projected/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-kube-api-access-z7vcp\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.022030 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-client-ca\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.022074 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-config\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.022242 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-serving-cert\") pod 
\"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.077683 4750 generic.go:334] "Generic (PLEG): container finished" podID="eb9668da-cfcd-4d28-8121-260f5799e964" containerID="890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330" exitCode=0 Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.077761 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.077775 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" event={"ID":"eb9668da-cfcd-4d28-8121-260f5799e964","Type":"ContainerDied","Data":"890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330"} Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.077873 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd" event={"ID":"eb9668da-cfcd-4d28-8121-260f5799e964","Type":"ContainerDied","Data":"391373ad748790785a12860eeb475f8340f783f4ef043442482196d4428cec0a"} Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.077893 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" podUID="3eeb0647-10c7-4f04-9c5a-149a905897a2" containerName="controller-manager" containerID="cri-o://59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221" gracePeriod=30 Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.077907 4750 scope.go:117] "RemoveContainer" containerID="890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.111633 
4750 scope.go:117] "RemoveContainer" containerID="890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.114321 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"] Feb 14 13:58:05 crc kubenswrapper[4750]: E0214 13:58:05.114528 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330\": container with ID starting with 890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330 not found: ID does not exist" containerID="890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.114595 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330"} err="failed to get container status \"890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330\": rpc error: code = NotFound desc = could not find container \"890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330\": container with ID starting with 890d711514e7aaa9a1036dd2beb32bc2c42e174e05b366ade9a4643e5fba4330 not found: ID does not exist" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.118973 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b4b789675-m2tmd"] Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.123512 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-client-ca\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " 
pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.123576 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-config\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.123650 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-serving-cert\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.124616 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vcp\" (UniqueName: \"kubernetes.io/projected/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-kube-api-access-z7vcp\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.124923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-client-ca\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.125564 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-config\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.135039 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-serving-cert\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.143911 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vcp\" (UniqueName: \"kubernetes.io/projected/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-kube-api-access-z7vcp\") pod \"route-controller-manager-55c4bd5b98-xtqfd\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.176490 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.510728 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd"] Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.665896 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.736767 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-proxy-ca-bundles\") pod \"3eeb0647-10c7-4f04-9c5a-149a905897a2\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.736847 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-config\") pod \"3eeb0647-10c7-4f04-9c5a-149a905897a2\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.736881 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6hlr\" (UniqueName: \"kubernetes.io/projected/3eeb0647-10c7-4f04-9c5a-149a905897a2-kube-api-access-j6hlr\") pod \"3eeb0647-10c7-4f04-9c5a-149a905897a2\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.736959 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-client-ca\") pod \"3eeb0647-10c7-4f04-9c5a-149a905897a2\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.736982 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eeb0647-10c7-4f04-9c5a-149a905897a2-serving-cert\") pod \"3eeb0647-10c7-4f04-9c5a-149a905897a2\" (UID: \"3eeb0647-10c7-4f04-9c5a-149a905897a2\") " Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.737729 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "3eeb0647-10c7-4f04-9c5a-149a905897a2" (UID: "3eeb0647-10c7-4f04-9c5a-149a905897a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.737833 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3eeb0647-10c7-4f04-9c5a-149a905897a2" (UID: "3eeb0647-10c7-4f04-9c5a-149a905897a2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.737968 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.737981 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.738735 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-config" (OuterVolumeSpecName: "config") pod "3eeb0647-10c7-4f04-9c5a-149a905897a2" (UID: "3eeb0647-10c7-4f04-9c5a-149a905897a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.741310 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eeb0647-10c7-4f04-9c5a-149a905897a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3eeb0647-10c7-4f04-9c5a-149a905897a2" (UID: "3eeb0647-10c7-4f04-9c5a-149a905897a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.741723 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eeb0647-10c7-4f04-9c5a-149a905897a2-kube-api-access-j6hlr" (OuterVolumeSpecName: "kube-api-access-j6hlr") pod "3eeb0647-10c7-4f04-9c5a-149a905897a2" (UID: "3eeb0647-10c7-4f04-9c5a-149a905897a2"). InnerVolumeSpecName "kube-api-access-j6hlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.839080 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeb0647-10c7-4f04-9c5a-149a905897a2-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.839153 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6hlr\" (UniqueName: \"kubernetes.io/projected/3eeb0647-10c7-4f04-9c5a-149a905897a2-kube-api-access-j6hlr\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:05 crc kubenswrapper[4750]: I0214 13:58:05.839174 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eeb0647-10c7-4f04-9c5a-149a905897a2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.088378 4750 generic.go:334] "Generic (PLEG): container finished" podID="3eeb0647-10c7-4f04-9c5a-149a905897a2" containerID="59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221" exitCode=0 
Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.088435 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" event={"ID":"3eeb0647-10c7-4f04-9c5a-149a905897a2","Type":"ContainerDied","Data":"59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221"} Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.088457 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" event={"ID":"3eeb0647-10c7-4f04-9c5a-149a905897a2","Type":"ContainerDied","Data":"75b76e3286996a9a9be7770b17a665dc21769614e05c6ae0e9ed58fdd8387370"} Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.088474 4750 scope.go:117] "RemoveContainer" containerID="59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.088644 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.093500 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" event={"ID":"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44","Type":"ContainerStarted","Data":"04850c9681a395fbb7c0ab3cba3ad234feabf3202bc835aa323834950ebed7b6"} Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.094477 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" event={"ID":"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44","Type":"ContainerStarted","Data":"b6c6cbd1e1f1b14e40d5f4fd6cfaae3980730762d4cc9163c0bea436addce6c8"} Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.094636 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 
13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.107305 4750 scope.go:117] "RemoveContainer" containerID="59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221" Feb 14 13:58:06 crc kubenswrapper[4750]: E0214 13:58:06.108102 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221\": container with ID starting with 59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221 not found: ID does not exist" containerID="59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.108218 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221"} err="failed to get container status \"59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221\": rpc error: code = NotFound desc = could not find container \"59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221\": container with ID starting with 59209ea4ec5b95431483d2f150be3349ade5c2a964ce8e9705696021f7b06221 not found: ID does not exist" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.123861 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" podStartSLOduration=3.123830766 podStartE2EDuration="3.123830766s" podCreationTimestamp="2026-02-14 13:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:06.122600591 +0000 UTC m=+358.148590112" watchObservedRunningTime="2026-02-14 13:58:06.123830766 +0000 UTC m=+358.149820287" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.140803 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"] Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.146058 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5fc4c9bf-jpt98"] Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.304307 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.754493 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eeb0647-10c7-4f04-9c5a-149a905897a2" path="/var/lib/kubelet/pods/3eeb0647-10c7-4f04-9c5a-149a905897a2/volumes" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.756309 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9668da-cfcd-4d28-8121-260f5799e964" path="/var/lib/kubelet/pods/eb9668da-cfcd-4d28-8121-260f5799e964/volumes" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.843073 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc"] Feb 14 13:58:06 crc kubenswrapper[4750]: E0214 13:58:06.843539 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeb0647-10c7-4f04-9c5a-149a905897a2" containerName="controller-manager" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.843568 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeb0647-10c7-4f04-9c5a-149a905897a2" containerName="controller-manager" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.843789 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeb0647-10c7-4f04-9c5a-149a905897a2" containerName="controller-manager" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.844585 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.848398 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.848948 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.849307 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.849580 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.853065 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.853650 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.866216 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.878550 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc"] Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.952856 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-serving-cert\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " 
pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.952928 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6gm\" (UniqueName: \"kubernetes.io/projected/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-kube-api-access-7k6gm\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.953523 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-client-ca\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.953640 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-proxy-ca-bundles\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:06 crc kubenswrapper[4750]: I0214 13:58:06.953759 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-config\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.056296 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-config\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.056417 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-serving-cert\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.056514 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6gm\" (UniqueName: \"kubernetes.io/projected/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-kube-api-access-7k6gm\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.056579 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-client-ca\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.056641 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-proxy-ca-bundles\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.058879 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-client-ca\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.059385 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-proxy-ca-bundles\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.061560 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-config\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.062871 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-serving-cert\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.080096 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k6gm\" (UniqueName: \"kubernetes.io/projected/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-kube-api-access-7k6gm\") pod \"controller-manager-7f9cb9fc6d-vx4xc\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 
13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.169015 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:07 crc kubenswrapper[4750]: I0214 13:58:07.601074 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc"] Feb 14 13:58:08 crc kubenswrapper[4750]: I0214 13:58:08.108045 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" event={"ID":"b0905dd0-8c41-4d47-ac02-06c5baf79d7e","Type":"ContainerStarted","Data":"46e1fc2e74bb217bb30d92506c21bb156bc9c23b1e76943935aaf46e1ffdf086"} Feb 14 13:58:08 crc kubenswrapper[4750]: I0214 13:58:08.108105 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" event={"ID":"b0905dd0-8c41-4d47-ac02-06c5baf79d7e","Type":"ContainerStarted","Data":"419baed1b9df02ccec8b2ea01dd3a69d0cf20c7727ca309f75f286ac2f490cc1"} Feb 14 13:58:08 crc kubenswrapper[4750]: I0214 13:58:08.128670 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" podStartSLOduration=5.128642792 podStartE2EDuration="5.128642792s" podCreationTimestamp="2026-02-14 13:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:08.123813304 +0000 UTC m=+360.149802795" watchObservedRunningTime="2026-02-14 13:58:08.128642792 +0000 UTC m=+360.154632313" Feb 14 13:58:09 crc kubenswrapper[4750]: I0214 13:58:09.113805 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:09 crc kubenswrapper[4750]: I0214 13:58:09.122421 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.260148 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-59rbx"] Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.261344 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.286549 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-59rbx"] Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440174 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440546 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-registry-tls\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440585 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-bound-sa-token\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: 
I0214 13:58:20.440608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f46a0c2-8611-44cb-ac45-761b6030f9d1-registry-certificates\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440633 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f46a0c2-8611-44cb-ac45-761b6030f9d1-trusted-ca\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440665 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f46a0c2-8611-44cb-ac45-761b6030f9d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440787 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwb7j\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-kube-api-access-fwb7j\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.440862 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f46a0c2-8611-44cb-ac45-761b6030f9d1-installation-pull-secrets\") 
pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.460888 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.542555 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwb7j\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-kube-api-access-fwb7j\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.543474 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f46a0c2-8611-44cb-ac45-761b6030f9d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.545490 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-registry-tls\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.545557 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-bound-sa-token\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.545587 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f46a0c2-8611-44cb-ac45-761b6030f9d1-registry-certificates\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.545614 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f46a0c2-8611-44cb-ac45-761b6030f9d1-trusted-ca\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.545661 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f46a0c2-8611-44cb-ac45-761b6030f9d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.547085 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f46a0c2-8611-44cb-ac45-761b6030f9d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.550986 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f46a0c2-8611-44cb-ac45-761b6030f9d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.551684 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f46a0c2-8611-44cb-ac45-761b6030f9d1-registry-certificates\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.557148 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f46a0c2-8611-44cb-ac45-761b6030f9d1-trusted-ca\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.560531 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-registry-tls\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.571673 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwb7j\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-kube-api-access-fwb7j\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc 
kubenswrapper[4750]: I0214 13:58:20.585164 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f46a0c2-8611-44cb-ac45-761b6030f9d1-bound-sa-token\") pod \"image-registry-66df7c8f76-59rbx\" (UID: \"1f46a0c2-8611-44cb-ac45-761b6030f9d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:20 crc kubenswrapper[4750]: I0214 13:58:20.878760 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:21 crc kubenswrapper[4750]: I0214 13:58:21.369280 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-59rbx"] Feb 14 13:58:21 crc kubenswrapper[4750]: W0214 13:58:21.378881 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f46a0c2_8611_44cb_ac45_761b6030f9d1.slice/crio-0299201c33632402f086e53d5c049695b9e2dbba75e1c38132a2e0320fe699f1 WatchSource:0}: Error finding container 0299201c33632402f086e53d5c049695b9e2dbba75e1c38132a2e0320fe699f1: Status 404 returned error can't find the container with id 0299201c33632402f086e53d5c049695b9e2dbba75e1c38132a2e0320fe699f1 Feb 14 13:58:22 crc kubenswrapper[4750]: I0214 13:58:22.193525 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" event={"ID":"1f46a0c2-8611-44cb-ac45-761b6030f9d1","Type":"ContainerStarted","Data":"b21999f49433210cf179538f039c170f77e9cbf1c5f324e53c2baff61e4878c8"} Feb 14 13:58:22 crc kubenswrapper[4750]: I0214 13:58:22.193994 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:22 crc kubenswrapper[4750]: I0214 13:58:22.194015 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" event={"ID":"1f46a0c2-8611-44cb-ac45-761b6030f9d1","Type":"ContainerStarted","Data":"0299201c33632402f086e53d5c049695b9e2dbba75e1c38132a2e0320fe699f1"} Feb 14 13:58:30 crc kubenswrapper[4750]: I0214 13:58:30.128767 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 13:58:30 crc kubenswrapper[4750]: I0214 13:58:30.129311 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 13:58:37 crc kubenswrapper[4750]: I0214 13:58:37.957391 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" podStartSLOduration=17.957366758 podStartE2EDuration="17.957366758s" podCreationTimestamp="2026-02-14 13:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:22.232385981 +0000 UTC m=+374.258375522" watchObservedRunningTime="2026-02-14 13:58:37.957366758 +0000 UTC m=+389.983356249" Feb 14 13:58:37 crc kubenswrapper[4750]: I0214 13:58:37.960996 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc"] Feb 14 13:58:37 crc kubenswrapper[4750]: I0214 13:58:37.961297 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" podUID="b0905dd0-8c41-4d47-ac02-06c5baf79d7e" 
containerName="controller-manager" containerID="cri-o://46e1fc2e74bb217bb30d92506c21bb156bc9c23b1e76943935aaf46e1ffdf086" gracePeriod=30 Feb 14 13:58:37 crc kubenswrapper[4750]: I0214 13:58:37.976947 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd"] Feb 14 13:58:37 crc kubenswrapper[4750]: I0214 13:58:37.977149 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" podUID="23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" containerName="route-controller-manager" containerID="cri-o://04850c9681a395fbb7c0ab3cba3ad234feabf3202bc835aa323834950ebed7b6" gracePeriod=30 Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.295871 4750 generic.go:334] "Generic (PLEG): container finished" podID="b0905dd0-8c41-4d47-ac02-06c5baf79d7e" containerID="46e1fc2e74bb217bb30d92506c21bb156bc9c23b1e76943935aaf46e1ffdf086" exitCode=0 Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.296168 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" event={"ID":"b0905dd0-8c41-4d47-ac02-06c5baf79d7e","Type":"ContainerDied","Data":"46e1fc2e74bb217bb30d92506c21bb156bc9c23b1e76943935aaf46e1ffdf086"} Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.302501 4750 generic.go:334] "Generic (PLEG): container finished" podID="23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" containerID="04850c9681a395fbb7c0ab3cba3ad234feabf3202bc835aa323834950ebed7b6" exitCode=0 Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.302548 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" event={"ID":"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44","Type":"ContainerDied","Data":"04850c9681a395fbb7c0ab3cba3ad234feabf3202bc835aa323834950ebed7b6"} Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 
13:58:38.537692 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.541375 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.625767 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-config\") pod \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.626701 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-config" (OuterVolumeSpecName: "config") pod "b0905dd0-8c41-4d47-ac02-06c5baf79d7e" (UID: "b0905dd0-8c41-4d47-ac02-06c5baf79d7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.726650 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-client-ca\") pod \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.726733 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-proxy-ca-bundles\") pod \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.726843 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-serving-cert\") pod \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.726899 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k6gm\" (UniqueName: \"kubernetes.io/projected/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-kube-api-access-7k6gm\") pod \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\" (UID: \"b0905dd0-8c41-4d47-ac02-06c5baf79d7e\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727035 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-serving-cert\") pod \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727085 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vcp\" 
(UniqueName: \"kubernetes.io/projected/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-kube-api-access-z7vcp\") pod \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727195 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-config\") pod \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727309 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-client-ca\") pod \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\" (UID: \"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44\") " Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727452 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0905dd0-8c41-4d47-ac02-06c5baf79d7e" (UID: "b0905dd0-8c41-4d47-ac02-06c5baf79d7e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727663 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b0905dd0-8c41-4d47-ac02-06c5baf79d7e" (UID: "b0905dd0-8c41-4d47-ac02-06c5baf79d7e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727912 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.727970 4750 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.728001 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.728506 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-config" (OuterVolumeSpecName: "config") pod "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" (UID: "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.728726 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-client-ca" (OuterVolumeSpecName: "client-ca") pod "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" (UID: "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.733001 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" (UID: "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.732988 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-kube-api-access-7k6gm" (OuterVolumeSpecName: "kube-api-access-7k6gm") pod "b0905dd0-8c41-4d47-ac02-06c5baf79d7e" (UID: "b0905dd0-8c41-4d47-ac02-06c5baf79d7e"). InnerVolumeSpecName "kube-api-access-7k6gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.733759 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0905dd0-8c41-4d47-ac02-06c5baf79d7e" (UID: "b0905dd0-8c41-4d47-ac02-06c5baf79d7e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.733942 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-kube-api-access-z7vcp" (OuterVolumeSpecName: "kube-api-access-z7vcp") pod "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" (UID: "23108c6e-bb37-4d3f-8f66-8af6c6b2fc44"). InnerVolumeSpecName "kube-api-access-z7vcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.828496 4750 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.828539 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.828555 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k6gm\" (UniqueName: \"kubernetes.io/projected/b0905dd0-8c41-4d47-ac02-06c5baf79d7e-kube-api-access-7k6gm\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.828570 4750 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.828585 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7vcp\" (UniqueName: \"kubernetes.io/projected/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-kube-api-access-z7vcp\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:38 crc kubenswrapper[4750]: I0214 13:58:38.828597 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44-config\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.311040 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" event={"ID":"b0905dd0-8c41-4d47-ac02-06c5baf79d7e","Type":"ContainerDied","Data":"419baed1b9df02ccec8b2ea01dd3a69d0cf20c7727ca309f75f286ac2f490cc1"} Feb 14 
13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.311130 4750 scope.go:117] "RemoveContainer" containerID="46e1fc2e74bb217bb30d92506c21bb156bc9c23b1e76943935aaf46e1ffdf086" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.311053 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.315456 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" event={"ID":"23108c6e-bb37-4d3f-8f66-8af6c6b2fc44","Type":"ContainerDied","Data":"b6c6cbd1e1f1b14e40d5f4fd6cfaae3980730762d4cc9163c0bea436addce6c8"} Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.315511 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.339245 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc"] Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.343995 4750 scope.go:117] "RemoveContainer" containerID="04850c9681a395fbb7c0ab3cba3ad234feabf3202bc835aa323834950ebed7b6" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.345305 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f9cb9fc6d-vx4xc"] Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.350311 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd"] Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.364332 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4bd5b98-xtqfd"] Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.866096 4750 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58f787f9d8-xz2cd"] Feb 14 13:58:39 crc kubenswrapper[4750]: E0214 13:58:39.866454 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" containerName="route-controller-manager" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.866471 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" containerName="route-controller-manager" Feb 14 13:58:39 crc kubenswrapper[4750]: E0214 13:58:39.866499 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0905dd0-8c41-4d47-ac02-06c5baf79d7e" containerName="controller-manager" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.866508 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0905dd0-8c41-4d47-ac02-06c5baf79d7e" containerName="controller-manager" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.866619 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" containerName="route-controller-manager" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.866638 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0905dd0-8c41-4d47-ac02-06c5baf79d7e" containerName="controller-manager" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.867080 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.870432 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.870558 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.870980 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.871010 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.870982 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.871166 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.872193 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q"] Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.872866 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.874241 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.876422 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.876637 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.876793 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.876923 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.880061 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.882936 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.911283 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q"] Feb 14 13:58:39 crc kubenswrapper[4750]: I0214 13:58:39.918715 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58f787f9d8-xz2cd"] Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042515 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-config\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042551 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f297d\" (UniqueName: \"kubernetes.io/projected/54121bf5-0363-4aa4-bc39-090c9592816d-kube-api-access-f297d\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042585 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zcv\" (UniqueName: \"kubernetes.io/projected/a1edc045-4fe5-48b5-88be-3e5d5694c151-kube-api-access-q8zcv\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042605 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1edc045-4fe5-48b5-88be-3e5d5694c151-client-ca\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042622 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1edc045-4fe5-48b5-88be-3e5d5694c151-serving-cert\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: 
\"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042641 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-client-ca\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042657 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1edc045-4fe5-48b5-88be-3e5d5694c151-config\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042748 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-proxy-ca-bundles\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.042991 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54121bf5-0363-4aa4-bc39-090c9592816d-serving-cert\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144012 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54121bf5-0363-4aa4-bc39-090c9592816d-serving-cert\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144151 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-config\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144188 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f297d\" (UniqueName: \"kubernetes.io/projected/54121bf5-0363-4aa4-bc39-090c9592816d-kube-api-access-f297d\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144254 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8zcv\" (UniqueName: \"kubernetes.io/projected/a1edc045-4fe5-48b5-88be-3e5d5694c151-kube-api-access-q8zcv\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144298 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1edc045-4fe5-48b5-88be-3e5d5694c151-client-ca\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " 
pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144335 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1edc045-4fe5-48b5-88be-3e5d5694c151-serving-cert\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144377 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-client-ca\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144414 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1edc045-4fe5-48b5-88be-3e5d5694c151-config\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.144454 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-proxy-ca-bundles\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.145976 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-client-ca\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.146315 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1edc045-4fe5-48b5-88be-3e5d5694c151-client-ca\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.146359 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-proxy-ca-bundles\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.146472 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54121bf5-0363-4aa4-bc39-090c9592816d-config\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.148658 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1edc045-4fe5-48b5-88be-3e5d5694c151-config\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.150517 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54121bf5-0363-4aa4-bc39-090c9592816d-serving-cert\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.150658 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1edc045-4fe5-48b5-88be-3e5d5694c151-serving-cert\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.171945 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f297d\" (UniqueName: \"kubernetes.io/projected/54121bf5-0363-4aa4-bc39-090c9592816d-kube-api-access-f297d\") pod \"controller-manager-58f787f9d8-xz2cd\" (UID: \"54121bf5-0363-4aa4-bc39-090c9592816d\") " pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.177306 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8zcv\" (UniqueName: \"kubernetes.io/projected/a1edc045-4fe5-48b5-88be-3e5d5694c151-kube-api-access-q8zcv\") pod \"route-controller-manager-5cd566f588-mhd2q\" (UID: \"a1edc045-4fe5-48b5-88be-3e5d5694c151\") " pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.214009 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.225737 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.613795 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58f787f9d8-xz2cd"] Feb 14 13:58:40 crc kubenswrapper[4750]: W0214 13:58:40.621628 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54121bf5_0363_4aa4_bc39_090c9592816d.slice/crio-797da2ec97af622d8e77b8f50aa3c1fc3b355cec6b3a2106fe23da75e31b401c WatchSource:0}: Error finding container 797da2ec97af622d8e77b8f50aa3c1fc3b355cec6b3a2106fe23da75e31b401c: Status 404 returned error can't find the container with id 797da2ec97af622d8e77b8f50aa3c1fc3b355cec6b3a2106fe23da75e31b401c Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.728938 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q"] Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.750905 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23108c6e-bb37-4d3f-8f66-8af6c6b2fc44" path="/var/lib/kubelet/pods/23108c6e-bb37-4d3f-8f66-8af6c6b2fc44/volumes" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.751682 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0905dd0-8c41-4d47-ac02-06c5baf79d7e" path="/var/lib/kubelet/pods/b0905dd0-8c41-4d47-ac02-06c5baf79d7e/volumes" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.883852 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-59rbx" Feb 14 13:58:40 crc kubenswrapper[4750]: I0214 13:58:40.941494 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2zdhw"] Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.332597 4750 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" event={"ID":"a1edc045-4fe5-48b5-88be-3e5d5694c151","Type":"ContainerStarted","Data":"250db150d4a0cd91b5888f762654c5c8d1056f6002075360324f61d182650ee5"} Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.332899 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" event={"ID":"a1edc045-4fe5-48b5-88be-3e5d5694c151","Type":"ContainerStarted","Data":"71ee75630a0f5d38bce3a5c06f48280aac0030f4ef2784f6c4d66908b45c5023"} Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.332921 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.333942 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" event={"ID":"54121bf5-0363-4aa4-bc39-090c9592816d","Type":"ContainerStarted","Data":"5fd53fe9a7aef052a60c6e637785ca8d6c54f0d0c1dc148958598c81699d12ab"} Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.333964 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" event={"ID":"54121bf5-0363-4aa4-bc39-090c9592816d","Type":"ContainerStarted","Data":"797da2ec97af622d8e77b8f50aa3c1fc3b355cec6b3a2106fe23da75e31b401c"} Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.334282 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.339164 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 
13:58:41.341271 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.360264 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cd566f588-mhd2q" podStartSLOduration=3.360245978 podStartE2EDuration="3.360245978s" podCreationTimestamp="2026-02-14 13:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:41.357247697 +0000 UTC m=+393.383237178" watchObservedRunningTime="2026-02-14 13:58:41.360245978 +0000 UTC m=+393.386235459" Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.370833 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58f787f9d8-xz2cd" podStartSLOduration=4.370814253 podStartE2EDuration="4.370814253s" podCreationTimestamp="2026-02-14 13:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:41.370053573 +0000 UTC m=+393.396043054" watchObservedRunningTime="2026-02-14 13:58:41.370814253 +0000 UTC m=+393.396803734" Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.973959 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8hcx"] Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.974565 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w8hcx" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="registry-server" containerID="cri-o://6f15605753cef855d25442b204260d4dd140669bc217ca0eb4db0acffec38435" gracePeriod=30 Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.983131 4750 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdd5l"] Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.983567 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdd5l" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="registry-server" containerID="cri-o://bf4a2c153d9aef0a31213671bbd61ab3c28e40d4c685b42c92441e618ecf5cf0" gracePeriod=30 Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.996510 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49gzj"] Feb 14 13:58:41 crc kubenswrapper[4750]: I0214 13:58:41.996970 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerName="marketplace-operator" containerID="cri-o://041a8f7701cc78b4fc3d54bcac8d40d07b76a1c9c4b1c4c97d405e881eb9c818" gracePeriod=30 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.002335 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vxzp"] Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.004615 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9vxzp" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="registry-server" containerID="cri-o://967681ad634c5bc057574db531f6490afda5f5f3688bf42472db576e2c7e8db8" gracePeriod=30 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.023578 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hbtg9"] Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.024033 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hbtg9" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" 
containerName="registry-server" containerID="cri-o://4eb2d91110a213e73d7657b1b5e2ff5b8e7dfdced2b9adfebf33f59ce2732bc9" gracePeriod=30 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.025420 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdtms"] Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.026425 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.056104 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdtms"] Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.171426 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b634ea23-ca70-446c-8a62-0910256d9025-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.171468 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b634ea23-ca70-446c-8a62-0910256d9025-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.171509 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tjq\" (UniqueName: \"kubernetes.io/projected/b634ea23-ca70-446c-8a62-0910256d9025-kube-api-access-52tjq\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") 
" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.272721 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b634ea23-ca70-446c-8a62-0910256d9025-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.272774 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b634ea23-ca70-446c-8a62-0910256d9025-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.272821 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52tjq\" (UniqueName: \"kubernetes.io/projected/b634ea23-ca70-446c-8a62-0910256d9025-kube-api-access-52tjq\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.274227 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b634ea23-ca70-446c-8a62-0910256d9025-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.288371 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b634ea23-ca70-446c-8a62-0910256d9025-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.291997 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52tjq\" (UniqueName: \"kubernetes.io/projected/b634ea23-ca70-446c-8a62-0910256d9025-kube-api-access-52tjq\") pod \"marketplace-operator-79b997595-tdtms\" (UID: \"b634ea23-ca70-446c-8a62-0910256d9025\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.343908 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerDied","Data":"4eb2d91110a213e73d7657b1b5e2ff5b8e7dfdced2b9adfebf33f59ce2732bc9"} Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.343852 4750 generic.go:334] "Generic (PLEG): container finished" podID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerID="4eb2d91110a213e73d7657b1b5e2ff5b8e7dfdced2b9adfebf33f59ce2732bc9" exitCode=0 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.346334 4750 generic.go:334] "Generic (PLEG): container finished" podID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerID="041a8f7701cc78b4fc3d54bcac8d40d07b76a1c9c4b1c4c97d405e881eb9c818" exitCode=0 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.346389 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" event={"ID":"39156cc1-cba3-4fce-b877-82ee6ac6ce02","Type":"ContainerDied","Data":"041a8f7701cc78b4fc3d54bcac8d40d07b76a1c9c4b1c4c97d405e881eb9c818"} Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.348793 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerID="bf4a2c153d9aef0a31213671bbd61ab3c28e40d4c685b42c92441e618ecf5cf0" exitCode=0 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.348840 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerDied","Data":"bf4a2c153d9aef0a31213671bbd61ab3c28e40d4c685b42c92441e618ecf5cf0"} Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.348901 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdd5l" event={"ID":"f2d2bcf7-e24d-4879-85df-777e045107ad","Type":"ContainerDied","Data":"af9066c3b974bbd28f5f5ab2d0380a30b59dc075216d4546ce5911daf68f6443"} Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.348922 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9066c3b974bbd28f5f5ab2d0380a30b59dc075216d4546ce5911daf68f6443" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.350957 4750 generic.go:334] "Generic (PLEG): container finished" podID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerID="6f15605753cef855d25442b204260d4dd140669bc217ca0eb4db0acffec38435" exitCode=0 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.351027 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8hcx" event={"ID":"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8","Type":"ContainerDied","Data":"6f15605753cef855d25442b204260d4dd140669bc217ca0eb4db0acffec38435"} Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.353041 4750 generic.go:334] "Generic (PLEG): container finished" podID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerID="967681ad634c5bc057574db531f6490afda5f5f3688bf42472db576e2c7e8db8" exitCode=0 Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.353940 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vxzp" 
event={"ID":"76bd762e-0ab4-4d0a-8718-2fc5a0f23747","Type":"ContainerDied","Data":"967681ad634c5bc057574db531f6490afda5f5f3688bf42472db576e2c7e8db8"} Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.389020 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.396337 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.515585 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.530945 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.543216 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.578679 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.585625 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-utilities\") pod \"f2d2bcf7-e24d-4879-85df-777e045107ad\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.585723 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-catalog-content\") pod \"f2d2bcf7-e24d-4879-85df-777e045107ad\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.585777 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57t5v\" (UniqueName: \"kubernetes.io/projected/f2d2bcf7-e24d-4879-85df-777e045107ad-kube-api-access-57t5v\") pod \"f2d2bcf7-e24d-4879-85df-777e045107ad\" (UID: \"f2d2bcf7-e24d-4879-85df-777e045107ad\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.588971 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-utilities" (OuterVolumeSpecName: "utilities") pod "f2d2bcf7-e24d-4879-85df-777e045107ad" (UID: "f2d2bcf7-e24d-4879-85df-777e045107ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.593126 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d2bcf7-e24d-4879-85df-777e045107ad-kube-api-access-57t5v" (OuterVolumeSpecName: "kube-api-access-57t5v") pod "f2d2bcf7-e24d-4879-85df-777e045107ad" (UID: "f2d2bcf7-e24d-4879-85df-777e045107ad"). InnerVolumeSpecName "kube-api-access-57t5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.646558 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2d2bcf7-e24d-4879-85df-777e045107ad" (UID: "f2d2bcf7-e24d-4879-85df-777e045107ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686339 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-catalog-content\") pod \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686392 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktfcs\" (UniqueName: \"kubernetes.io/projected/d241cf01-4aa5-46af-9900-0c6a7880f9f4-kube-api-access-ktfcs\") pod \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686425 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cwbt\" (UniqueName: \"kubernetes.io/projected/39156cc1-cba3-4fce-b877-82ee6ac6ce02-kube-api-access-2cwbt\") pod \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686443 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-utilities\") pod \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686464 
4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-catalog-content\") pod \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686484 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjjcw\" (UniqueName: \"kubernetes.io/projected/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-kube-api-access-zjjcw\") pod \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\" (UID: \"76bd762e-0ab4-4d0a-8718-2fc5a0f23747\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686501 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5hbs\" (UniqueName: \"kubernetes.io/projected/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-kube-api-access-w5hbs\") pod \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686519 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-operator-metrics\") pod \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686543 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-utilities\") pod \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\" (UID: \"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686566 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-trusted-ca\") pod \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\" (UID: \"39156cc1-cba3-4fce-b877-82ee6ac6ce02\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686595 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-utilities\") pod \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686618 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-catalog-content\") pod \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\" (UID: \"d241cf01-4aa5-46af-9900-0c6a7880f9f4\") " Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686815 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686827 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57t5v\" (UniqueName: \"kubernetes.io/projected/f2d2bcf7-e24d-4879-85df-777e045107ad-kube-api-access-57t5v\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.686839 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d2bcf7-e24d-4879-85df-777e045107ad-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.688555 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-utilities" (OuterVolumeSpecName: "utilities") pod "bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" (UID: 
"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.689094 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "39156cc1-cba3-4fce-b877-82ee6ac6ce02" (UID: "39156cc1-cba3-4fce-b877-82ee6ac6ce02"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.689312 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-utilities" (OuterVolumeSpecName: "utilities") pod "d241cf01-4aa5-46af-9900-0c6a7880f9f4" (UID: "d241cf01-4aa5-46af-9900-0c6a7880f9f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.689573 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-utilities" (OuterVolumeSpecName: "utilities") pod "76bd762e-0ab4-4d0a-8718-2fc5a0f23747" (UID: "76bd762e-0ab4-4d0a-8718-2fc5a0f23747"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.690365 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d241cf01-4aa5-46af-9900-0c6a7880f9f4-kube-api-access-ktfcs" (OuterVolumeSpecName: "kube-api-access-ktfcs") pod "d241cf01-4aa5-46af-9900-0c6a7880f9f4" (UID: "d241cf01-4aa5-46af-9900-0c6a7880f9f4"). InnerVolumeSpecName "kube-api-access-ktfcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.691167 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-kube-api-access-w5hbs" (OuterVolumeSpecName: "kube-api-access-w5hbs") pod "bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" (UID: "bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8"). InnerVolumeSpecName "kube-api-access-w5hbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.691277 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "39156cc1-cba3-4fce-b877-82ee6ac6ce02" (UID: "39156cc1-cba3-4fce-b877-82ee6ac6ce02"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.691642 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-kube-api-access-zjjcw" (OuterVolumeSpecName: "kube-api-access-zjjcw") pod "76bd762e-0ab4-4d0a-8718-2fc5a0f23747" (UID: "76bd762e-0ab4-4d0a-8718-2fc5a0f23747"). InnerVolumeSpecName "kube-api-access-zjjcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.694770 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39156cc1-cba3-4fce-b877-82ee6ac6ce02-kube-api-access-2cwbt" (OuterVolumeSpecName: "kube-api-access-2cwbt") pod "39156cc1-cba3-4fce-b877-82ee6ac6ce02" (UID: "39156cc1-cba3-4fce-b877-82ee6ac6ce02"). InnerVolumeSpecName "kube-api-access-2cwbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.712711 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76bd762e-0ab4-4d0a-8718-2fc5a0f23747" (UID: "76bd762e-0ab4-4d0a-8718-2fc5a0f23747"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.741915 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" (UID: "bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787679 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787711 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktfcs\" (UniqueName: \"kubernetes.io/projected/d241cf01-4aa5-46af-9900-0c6a7880f9f4-kube-api-access-ktfcs\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787721 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cwbt\" (UniqueName: \"kubernetes.io/projected/39156cc1-cba3-4fce-b877-82ee6ac6ce02-kube-api-access-2cwbt\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787730 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787740 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787750 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjjcw\" (UniqueName: \"kubernetes.io/projected/76bd762e-0ab4-4d0a-8718-2fc5a0f23747-kube-api-access-zjjcw\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787758 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5hbs\" (UniqueName: \"kubernetes.io/projected/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-kube-api-access-w5hbs\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787767 4750 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787776 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787784 4750 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39156cc1-cba3-4fce-b877-82ee6ac6ce02-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.787791 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.833259 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d241cf01-4aa5-46af-9900-0c6a7880f9f4" (UID: "d241cf01-4aa5-46af-9900-0c6a7880f9f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.888585 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d241cf01-4aa5-46af-9900-0c6a7880f9f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 13:58:42 crc kubenswrapper[4750]: I0214 13:58:42.904278 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdtms"] Feb 14 13:58:42 crc kubenswrapper[4750]: W0214 13:58:42.915104 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb634ea23_ca70_446c_8a62_0910256d9025.slice/crio-ca653829d073cd3101f1d90c00bfc4cf993cc78e298904343489c444daa42259 WatchSource:0}: Error finding container ca653829d073cd3101f1d90c00bfc4cf993cc78e298904343489c444daa42259: Status 404 returned error can't find the container with id ca653829d073cd3101f1d90c00bfc4cf993cc78e298904343489c444daa42259 Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.360867 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbtg9" event={"ID":"d241cf01-4aa5-46af-9900-0c6a7880f9f4","Type":"ContainerDied","Data":"5896f8ec7e461e82fdea4b1666be00ae518ad18e51fba99e7ca51272b7fa1d9b"} Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.361213 4750 scope.go:117] "RemoveContainer" 
containerID="4eb2d91110a213e73d7657b1b5e2ff5b8e7dfdced2b9adfebf33f59ce2732bc9" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.361330 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbtg9" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.367284 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.367276 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49gzj" event={"ID":"39156cc1-cba3-4fce-b877-82ee6ac6ce02","Type":"ContainerDied","Data":"b773639f573bb69fa90e24987be9887c863c912c29b42a518103ab98a1f3e1b3"} Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.369939 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8hcx" event={"ID":"bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8","Type":"ContainerDied","Data":"2bbfee6f223314407a1113c27043c47dcbfbf0b9045919a4223a19db866901b0"} Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.369960 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8hcx" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.372149 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vxzp" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.372764 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vxzp" event={"ID":"76bd762e-0ab4-4d0a-8718-2fc5a0f23747","Type":"ContainerDied","Data":"f887654072498dadc74d1d18dd3bce05aa955a2d2b97ad8493db600146d289a6"} Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.376458 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" event={"ID":"b634ea23-ca70-446c-8a62-0910256d9025","Type":"ContainerStarted","Data":"4294975bac247b99502c736dc929151961ba7f918ce9c06e367fd871c758b510"} Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.376652 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" event={"ID":"b634ea23-ca70-446c-8a62-0910256d9025","Type":"ContainerStarted","Data":"ca653829d073cd3101f1d90c00bfc4cf993cc78e298904343489c444daa42259"} Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.376764 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.376604 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdd5l" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.381370 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.386532 4750 scope.go:117] "RemoveContainer" containerID="542e9b1121ae1cbf5215c3adbb679ace9bb80038c1a2913d47e38ed9f34fe1ba" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.405602 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tdtms" podStartSLOduration=2.405582134 podStartE2EDuration="2.405582134s" podCreationTimestamp="2026-02-14 13:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:58:43.390633499 +0000 UTC m=+395.416622990" watchObservedRunningTime="2026-02-14 13:58:43.405582134 +0000 UTC m=+395.431571625" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.427352 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vxzp"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.429170 4750 scope.go:117] "RemoveContainer" containerID="69e44c13ae03c970cbb431d5a0d90d1d40fa350325405294e126e345b1369262" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.441424 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vxzp"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.452527 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49gzj"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.460208 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49gzj"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 
13:58:43.473308 4750 scope.go:117] "RemoveContainer" containerID="041a8f7701cc78b4fc3d54bcac8d40d07b76a1c9c4b1c4c97d405e881eb9c818" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.488063 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdd5l"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.494364 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdd5l"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.502565 4750 scope.go:117] "RemoveContainer" containerID="6f15605753cef855d25442b204260d4dd140669bc217ca0eb4db0acffec38435" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.502693 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hbtg9"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.511019 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hbtg9"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.515922 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8hcx"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.517631 4750 scope.go:117] "RemoveContainer" containerID="790351b5f06b27cad72ca24f95b2ca812b62f9b38181dc67ffafb59ece7a4c20" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.521400 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w8hcx"] Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.537356 4750 scope.go:117] "RemoveContainer" containerID="84d61a4cd235b64139004e40c75c3f7d43e49b50feada19c544d3c0d85736acf" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.553444 4750 scope.go:117] "RemoveContainer" containerID="967681ad634c5bc057574db531f6490afda5f5f3688bf42472db576e2c7e8db8" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.564070 4750 scope.go:117] "RemoveContainer" 
containerID="3340a86cb5e60b5396023c5d00c1bb5608dd5f7c6beae9470778ccb4d89aa5a6" Feb 14 13:58:43 crc kubenswrapper[4750]: I0214 13:58:43.575344 4750 scope.go:117] "RemoveContainer" containerID="35629182360f25bd3d42e0aa0e612c57cf24e24b6e621bb951f1d7493ad94cd1" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.189368 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tt7"] Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.189980 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerName="marketplace-operator" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190004 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerName="marketplace-operator" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190019 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190027 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190039 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190045 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190058 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190064 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" 
containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190073 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190079 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190087 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190093 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190104 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190130 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190141 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190149 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190160 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190167 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" 
containerName="extract-content" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190179 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190207 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190215 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190221 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190227 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190235 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="extract-utilities" Feb 14 13:58:44 crc kubenswrapper[4750]: E0214 13:58:44.190243 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190249 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190333 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190343 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190352 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" containerName="marketplace-operator" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190358 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.190369 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" containerName="registry-server" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.191086 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.192938 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.202533 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tt7"] Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.306829 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecca3833-89b7-4533-a365-160f8af73d1a-utilities\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.306883 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccwp\" (UniqueName: \"kubernetes.io/projected/ecca3833-89b7-4533-a365-160f8af73d1a-kube-api-access-lccwp\") pod \"redhat-marketplace-q8tt7\" (UID: 
\"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.307025 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecca3833-89b7-4533-a365-160f8af73d1a-catalog-content\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.399893 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d89lk"] Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.402441 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.405036 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.408599 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecca3833-89b7-4533-a365-160f8af73d1a-catalog-content\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.408720 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecca3833-89b7-4533-a365-160f8af73d1a-utilities\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.408771 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lccwp\" (UniqueName: \"kubernetes.io/projected/ecca3833-89b7-4533-a365-160f8af73d1a-kube-api-access-lccwp\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.409785 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecca3833-89b7-4533-a365-160f8af73d1a-catalog-content\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.410226 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecca3833-89b7-4533-a365-160f8af73d1a-utilities\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.412561 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d89lk"] Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.451089 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccwp\" (UniqueName: \"kubernetes.io/projected/ecca3833-89b7-4533-a365-160f8af73d1a-kube-api-access-lccwp\") pod \"redhat-marketplace-q8tt7\" (UID: \"ecca3833-89b7-4533-a365-160f8af73d1a\") " pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.509948 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e16d6e0-2460-4802-aef3-14c53a22c5f8-utilities\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " 
pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.509990 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clg8l\" (UniqueName: \"kubernetes.io/projected/0e16d6e0-2460-4802-aef3-14c53a22c5f8-kube-api-access-clg8l\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.510016 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e16d6e0-2460-4802-aef3-14c53a22c5f8-catalog-content\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.510948 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.611077 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e16d6e0-2460-4802-aef3-14c53a22c5f8-utilities\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.611329 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clg8l\" (UniqueName: \"kubernetes.io/projected/0e16d6e0-2460-4802-aef3-14c53a22c5f8-kube-api-access-clg8l\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.611389 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e16d6e0-2460-4802-aef3-14c53a22c5f8-catalog-content\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.611938 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e16d6e0-2460-4802-aef3-14c53a22c5f8-catalog-content\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.612229 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e16d6e0-2460-4802-aef3-14c53a22c5f8-utilities\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " 
pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.637153 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clg8l\" (UniqueName: \"kubernetes.io/projected/0e16d6e0-2460-4802-aef3-14c53a22c5f8-kube-api-access-clg8l\") pod \"community-operators-d89lk\" (UID: \"0e16d6e0-2460-4802-aef3-14c53a22c5f8\") " pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.723293 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.753869 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39156cc1-cba3-4fce-b877-82ee6ac6ce02" path="/var/lib/kubelet/pods/39156cc1-cba3-4fce-b877-82ee6ac6ce02/volumes" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.754714 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76bd762e-0ab4-4d0a-8718-2fc5a0f23747" path="/var/lib/kubelet/pods/76bd762e-0ab4-4d0a-8718-2fc5a0f23747/volumes" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.756247 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8" path="/var/lib/kubelet/pods/bc4d9a25-0fa3-4c70-b5ca-6e7ffa2860f8/volumes" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.762473 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d241cf01-4aa5-46af-9900-0c6a7880f9f4" path="/var/lib/kubelet/pods/d241cf01-4aa5-46af-9900-0c6a7880f9f4/volumes" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.763700 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d2bcf7-e24d-4879-85df-777e045107ad" path="/var/lib/kubelet/pods/f2d2bcf7-e24d-4879-85df-777e045107ad/volumes" Feb 14 13:58:44 crc kubenswrapper[4750]: I0214 13:58:44.938043 4750 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8tt7"] Feb 14 13:58:44 crc kubenswrapper[4750]: W0214 13:58:44.942604 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecca3833_89b7_4533_a365_160f8af73d1a.slice/crio-1d58d1303f0d49485a59fababa657802f6ff4dbb5b0ce51b3509416554933cf5 WatchSource:0}: Error finding container 1d58d1303f0d49485a59fababa657802f6ff4dbb5b0ce51b3509416554933cf5: Status 404 returned error can't find the container with id 1d58d1303f0d49485a59fababa657802f6ff4dbb5b0ce51b3509416554933cf5 Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.108395 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d89lk"] Feb 14 13:58:45 crc kubenswrapper[4750]: W0214 13:58:45.140181 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e16d6e0_2460_4802_aef3_14c53a22c5f8.slice/crio-52d09c72cb34e76c8718e617b4c1b2e887b50ffb5635fa87061fb8ed762b5cbe WatchSource:0}: Error finding container 52d09c72cb34e76c8718e617b4c1b2e887b50ffb5635fa87061fb8ed762b5cbe: Status 404 returned error can't find the container with id 52d09c72cb34e76c8718e617b4c1b2e887b50ffb5635fa87061fb8ed762b5cbe Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.406833 4750 generic.go:334] "Generic (PLEG): container finished" podID="ecca3833-89b7-4533-a365-160f8af73d1a" containerID="582a6f81c61e296ae05c2e721532468a965c1fd8c44d5eafea1c0a00e9ad61b6" exitCode=0 Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.406910 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tt7" event={"ID":"ecca3833-89b7-4533-a365-160f8af73d1a","Type":"ContainerDied","Data":"582a6f81c61e296ae05c2e721532468a965c1fd8c44d5eafea1c0a00e9ad61b6"} Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.406942 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tt7" event={"ID":"ecca3833-89b7-4533-a365-160f8af73d1a","Type":"ContainerStarted","Data":"1d58d1303f0d49485a59fababa657802f6ff4dbb5b0ce51b3509416554933cf5"} Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.408924 4750 generic.go:334] "Generic (PLEG): container finished" podID="0e16d6e0-2460-4802-aef3-14c53a22c5f8" containerID="52f3cd5f6df287e79694d2632f63d377868a17fc9f5ac0e6b9b779b2f1f9bfc8" exitCode=0 Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.409097 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d89lk" event={"ID":"0e16d6e0-2460-4802-aef3-14c53a22c5f8","Type":"ContainerDied","Data":"52f3cd5f6df287e79694d2632f63d377868a17fc9f5ac0e6b9b779b2f1f9bfc8"} Feb 14 13:58:45 crc kubenswrapper[4750]: I0214 13:58:45.409144 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d89lk" event={"ID":"0e16d6e0-2460-4802-aef3-14c53a22c5f8","Type":"ContainerStarted","Data":"52d09c72cb34e76c8718e617b4c1b2e887b50ffb5635fa87061fb8ed762b5cbe"} Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.418508 4750 generic.go:334] "Generic (PLEG): container finished" podID="0e16d6e0-2460-4802-aef3-14c53a22c5f8" containerID="665853a0fe7256d2e34e707fa9f88bf33c4a46f828430ef8af02b8d29268f489" exitCode=0 Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.418597 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d89lk" event={"ID":"0e16d6e0-2460-4802-aef3-14c53a22c5f8","Type":"ContainerDied","Data":"665853a0fe7256d2e34e707fa9f88bf33c4a46f828430ef8af02b8d29268f489"} Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.424951 4750 generic.go:334] "Generic (PLEG): container finished" podID="ecca3833-89b7-4533-a365-160f8af73d1a" containerID="0cc8ab0923441c19e9631a2f2eb22c623819201790f6d9bdddd4c7ccf740bf17" exitCode=0 Feb 14 13:58:46 crc 
kubenswrapper[4750]: I0214 13:58:46.425203 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tt7" event={"ID":"ecca3833-89b7-4533-a365-160f8af73d1a","Type":"ContainerDied","Data":"0cc8ab0923441c19e9631a2f2eb22c623819201790f6d9bdddd4c7ccf740bf17"} Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.593537 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q58l2"] Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.596139 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.598071 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.602626 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q58l2"] Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.740557 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bs27\" (UniqueName: \"kubernetes.io/projected/6dd3db5f-80db-4c43-acb5-445300c95649-kube-api-access-9bs27\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.740608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dd3db5f-80db-4c43-acb5-445300c95649-catalog-content\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.740714 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dd3db5f-80db-4c43-acb5-445300c95649-utilities\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.791676 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jn2x"] Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.793684 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.803860 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.816596 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jn2x"] Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.841692 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bs27\" (UniqueName: \"kubernetes.io/projected/6dd3db5f-80db-4c43-acb5-445300c95649-kube-api-access-9bs27\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.841766 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dd3db5f-80db-4c43-acb5-445300c95649-catalog-content\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.841848 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6dd3db5f-80db-4c43-acb5-445300c95649-utilities\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.842222 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dd3db5f-80db-4c43-acb5-445300c95649-utilities\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.842285 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dd3db5f-80db-4c43-acb5-445300c95649-catalog-content\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.866750 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bs27\" (UniqueName: \"kubernetes.io/projected/6dd3db5f-80db-4c43-acb5-445300c95649-kube-api-access-9bs27\") pod \"certified-operators-q58l2\" (UID: \"6dd3db5f-80db-4c43-acb5-445300c95649\") " pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.913179 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.942648 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9078426-1b92-4b30-8529-9ad63d68bf73-utilities\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.942695 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wcpf\" (UniqueName: \"kubernetes.io/projected/d9078426-1b92-4b30-8529-9ad63d68bf73-kube-api-access-2wcpf\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:46 crc kubenswrapper[4750]: I0214 13:58:46.942926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9078426-1b92-4b30-8529-9ad63d68bf73-catalog-content\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.043937 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wcpf\" (UniqueName: \"kubernetes.io/projected/d9078426-1b92-4b30-8529-9ad63d68bf73-kube-api-access-2wcpf\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.044313 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9078426-1b92-4b30-8529-9ad63d68bf73-catalog-content\") pod 
\"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.044356 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9078426-1b92-4b30-8529-9ad63d68bf73-utilities\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.044785 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9078426-1b92-4b30-8529-9ad63d68bf73-utilities\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.044907 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9078426-1b92-4b30-8529-9ad63d68bf73-catalog-content\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.065927 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wcpf\" (UniqueName: \"kubernetes.io/projected/d9078426-1b92-4b30-8529-9ad63d68bf73-kube-api-access-2wcpf\") pod \"redhat-operators-7jn2x\" (UID: \"d9078426-1b92-4b30-8529-9ad63d68bf73\") " pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.126421 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.370669 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q58l2"] Feb 14 13:58:47 crc kubenswrapper[4750]: W0214 13:58:47.374471 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd3db5f_80db_4c43_acb5_445300c95649.slice/crio-decec4f48bfa5aa365ec859b8625efd88ff8e3cdf4f7732ae3445096475c4064 WatchSource:0}: Error finding container decec4f48bfa5aa365ec859b8625efd88ff8e3cdf4f7732ae3445096475c4064: Status 404 returned error can't find the container with id decec4f48bfa5aa365ec859b8625efd88ff8e3cdf4f7732ae3445096475c4064 Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.434578 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d89lk" event={"ID":"0e16d6e0-2460-4802-aef3-14c53a22c5f8","Type":"ContainerStarted","Data":"cea6fef043c40681d65fed56b83117ccfbe90d3d70d5a3f6ec98fe9c255015ad"} Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.437767 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8tt7" event={"ID":"ecca3833-89b7-4533-a365-160f8af73d1a","Type":"ContainerStarted","Data":"c0163a1136ee27bee4a23f5ed2cb72c40f7516987e8b0d9f1c29cc9b13fb3795"} Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.440554 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q58l2" event={"ID":"6dd3db5f-80db-4c43-acb5-445300c95649","Type":"ContainerStarted","Data":"decec4f48bfa5aa365ec859b8625efd88ff8e3cdf4f7732ae3445096475c4064"} Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.456527 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d89lk" podStartSLOduration=2.027108053 
podStartE2EDuration="3.456510912s" podCreationTimestamp="2026-02-14 13:58:44 +0000 UTC" firstStartedPulling="2026-02-14 13:58:45.410704432 +0000 UTC m=+397.436693913" lastFinishedPulling="2026-02-14 13:58:46.840107291 +0000 UTC m=+398.866096772" observedRunningTime="2026-02-14 13:58:47.451429395 +0000 UTC m=+399.477418876" watchObservedRunningTime="2026-02-14 13:58:47.456510912 +0000 UTC m=+399.482500393" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.470562 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8tt7" podStartSLOduration=1.749784913 podStartE2EDuration="3.470542582s" podCreationTimestamp="2026-02-14 13:58:44 +0000 UTC" firstStartedPulling="2026-02-14 13:58:45.408482192 +0000 UTC m=+397.434471673" lastFinishedPulling="2026-02-14 13:58:47.129239861 +0000 UTC m=+399.155229342" observedRunningTime="2026-02-14 13:58:47.469634847 +0000 UTC m=+399.495624328" watchObservedRunningTime="2026-02-14 13:58:47.470542582 +0000 UTC m=+399.496532063" Feb 14 13:58:47 crc kubenswrapper[4750]: I0214 13:58:47.542571 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jn2x"] Feb 14 13:58:47 crc kubenswrapper[4750]: W0214 13:58:47.574729 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9078426_1b92_4b30_8529_9ad63d68bf73.slice/crio-32da5f7e672044dd66c2cb940881719d2fc206dc0c6a24e69a55b8ecd467f691 WatchSource:0}: Error finding container 32da5f7e672044dd66c2cb940881719d2fc206dc0c6a24e69a55b8ecd467f691: Status 404 returned error can't find the container with id 32da5f7e672044dd66c2cb940881719d2fc206dc0c6a24e69a55b8ecd467f691 Feb 14 13:58:48 crc kubenswrapper[4750]: I0214 13:58:48.447288 4750 generic.go:334] "Generic (PLEG): container finished" podID="d9078426-1b92-4b30-8529-9ad63d68bf73" containerID="6f2e3b48bae3bfece22fab89ba61c4332edb35364f4cfd0e356c6c73db041f6b" 
exitCode=0 Feb 14 13:58:48 crc kubenswrapper[4750]: I0214 13:58:48.447340 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jn2x" event={"ID":"d9078426-1b92-4b30-8529-9ad63d68bf73","Type":"ContainerDied","Data":"6f2e3b48bae3bfece22fab89ba61c4332edb35364f4cfd0e356c6c73db041f6b"} Feb 14 13:58:48 crc kubenswrapper[4750]: I0214 13:58:48.447398 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jn2x" event={"ID":"d9078426-1b92-4b30-8529-9ad63d68bf73","Type":"ContainerStarted","Data":"32da5f7e672044dd66c2cb940881719d2fc206dc0c6a24e69a55b8ecd467f691"} Feb 14 13:58:48 crc kubenswrapper[4750]: I0214 13:58:48.449298 4750 generic.go:334] "Generic (PLEG): container finished" podID="6dd3db5f-80db-4c43-acb5-445300c95649" containerID="89acc4aa593d99d7ad96a0c12da1504da2dba6db851cbfd71f9aceccf8b87aaf" exitCode=0 Feb 14 13:58:48 crc kubenswrapper[4750]: I0214 13:58:48.449389 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q58l2" event={"ID":"6dd3db5f-80db-4c43-acb5-445300c95649","Type":"ContainerDied","Data":"89acc4aa593d99d7ad96a0c12da1504da2dba6db851cbfd71f9aceccf8b87aaf"} Feb 14 13:58:49 crc kubenswrapper[4750]: I0214 13:58:49.457370 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jn2x" event={"ID":"d9078426-1b92-4b30-8529-9ad63d68bf73","Type":"ContainerStarted","Data":"d28be78854966ef00abd51930ec67459181e45e47d219ca67d6cfc360cad16d1"} Feb 14 13:58:49 crc kubenswrapper[4750]: I0214 13:58:49.459531 4750 generic.go:334] "Generic (PLEG): container finished" podID="6dd3db5f-80db-4c43-acb5-445300c95649" containerID="05ba0dd74ed6488e039c30bfa87a2e356c03912b6fd42c72da6b6e216de75950" exitCode=0 Feb 14 13:58:49 crc kubenswrapper[4750]: I0214 13:58:49.459579 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q58l2" 
event={"ID":"6dd3db5f-80db-4c43-acb5-445300c95649","Type":"ContainerDied","Data":"05ba0dd74ed6488e039c30bfa87a2e356c03912b6fd42c72da6b6e216de75950"} Feb 14 13:58:50 crc kubenswrapper[4750]: I0214 13:58:50.467918 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q58l2" event={"ID":"6dd3db5f-80db-4c43-acb5-445300c95649","Type":"ContainerStarted","Data":"90dae268404f4d4aa3193f6d941de717793b5e204915a2405ebc2f61e06e04bf"} Feb 14 13:58:50 crc kubenswrapper[4750]: I0214 13:58:50.470847 4750 generic.go:334] "Generic (PLEG): container finished" podID="d9078426-1b92-4b30-8529-9ad63d68bf73" containerID="d28be78854966ef00abd51930ec67459181e45e47d219ca67d6cfc360cad16d1" exitCode=0 Feb 14 13:58:50 crc kubenswrapper[4750]: I0214 13:58:50.470925 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jn2x" event={"ID":"d9078426-1b92-4b30-8529-9ad63d68bf73","Type":"ContainerDied","Data":"d28be78854966ef00abd51930ec67459181e45e47d219ca67d6cfc360cad16d1"} Feb 14 13:58:50 crc kubenswrapper[4750]: I0214 13:58:50.489385 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q58l2" podStartSLOduration=3.117230237 podStartE2EDuration="4.489365136s" podCreationTimestamp="2026-02-14 13:58:46 +0000 UTC" firstStartedPulling="2026-02-14 13:58:48.450756642 +0000 UTC m=+400.476746123" lastFinishedPulling="2026-02-14 13:58:49.822891541 +0000 UTC m=+401.848881022" observedRunningTime="2026-02-14 13:58:50.486133079 +0000 UTC m=+402.512122560" watchObservedRunningTime="2026-02-14 13:58:50.489365136 +0000 UTC m=+402.515354637" Feb 14 13:58:51 crc kubenswrapper[4750]: I0214 13:58:51.477826 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jn2x" event={"ID":"d9078426-1b92-4b30-8529-9ad63d68bf73","Type":"ContainerStarted","Data":"be1c7eb7ac59d189c1a877201cc767cf3acf94866c879127ed132ee8a85c97ef"} 
Feb 14 13:58:51 crc kubenswrapper[4750]: I0214 13:58:51.499382 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jn2x" podStartSLOduration=2.906807287 podStartE2EDuration="5.499362423s" podCreationTimestamp="2026-02-14 13:58:46 +0000 UTC" firstStartedPulling="2026-02-14 13:58:48.448524292 +0000 UTC m=+400.474513773" lastFinishedPulling="2026-02-14 13:58:51.041079428 +0000 UTC m=+403.067068909" observedRunningTime="2026-02-14 13:58:51.497538243 +0000 UTC m=+403.523527734" watchObservedRunningTime="2026-02-14 13:58:51.499362423 +0000 UTC m=+403.525351904" Feb 14 13:58:54 crc kubenswrapper[4750]: I0214 13:58:54.511427 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:54 crc kubenswrapper[4750]: I0214 13:58:54.511872 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:54 crc kubenswrapper[4750]: I0214 13:58:54.557331 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:54 crc kubenswrapper[4750]: I0214 13:58:54.724194 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:54 crc kubenswrapper[4750]: I0214 13:58:54.724251 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:54 crc kubenswrapper[4750]: I0214 13:58:54.766153 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:55 crc kubenswrapper[4750]: I0214 13:58:55.543716 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8tt7" Feb 14 13:58:55 crc kubenswrapper[4750]: 
I0214 13:58:55.565606 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d89lk" Feb 14 13:58:56 crc kubenswrapper[4750]: I0214 13:58:56.914051 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:56 crc kubenswrapper[4750]: I0214 13:58:56.914390 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:56 crc kubenswrapper[4750]: I0214 13:58:56.959871 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:58:57 crc kubenswrapper[4750]: I0214 13:58:57.126844 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:57 crc kubenswrapper[4750]: I0214 13:58:57.126892 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:57 crc kubenswrapper[4750]: I0214 13:58:57.185632 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:57 crc kubenswrapper[4750]: I0214 13:58:57.554396 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jn2x" Feb 14 13:58:57 crc kubenswrapper[4750]: I0214 13:58:57.555863 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q58l2" Feb 14 13:59:00 crc kubenswrapper[4750]: I0214 13:59:00.129317 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 14 13:59:00 crc kubenswrapper[4750]: I0214 13:59:00.129627 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 13:59:05 crc kubenswrapper[4750]: I0214 13:59:05.996653 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" podUID="6da4864c-6af8-4a50-b55f-c98d904808be" containerName="registry" containerID="cri-o://67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f" gracePeriod=30 Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.457598 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.564753 4750 generic.go:334] "Generic (PLEG): container finished" podID="6da4864c-6af8-4a50-b55f-c98d904808be" containerID="67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f" exitCode=0 Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.564798 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" event={"ID":"6da4864c-6af8-4a50-b55f-c98d904808be","Type":"ContainerDied","Data":"67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f"} Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.564831 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" event={"ID":"6da4864c-6af8-4a50-b55f-c98d904808be","Type":"ContainerDied","Data":"c68dad95a65f303e8d5c2bf7df97cfe04f111f403c31aee2f4fcae055a1760c3"} Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.564848 4750 scope.go:117] 
"RemoveContainer" containerID="67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.564845 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2zdhw" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.582663 4750 scope.go:117] "RemoveContainer" containerID="67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f" Feb 14 13:59:06 crc kubenswrapper[4750]: E0214 13:59:06.583060 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f\": container with ID starting with 67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f not found: ID does not exist" containerID="67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.583153 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f"} err="failed to get container status \"67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f\": rpc error: code = NotFound desc = could not find container \"67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f\": container with ID starting with 67aef73b1a215d8dccc10ef40d88c27900250af3fdec2d27c8afba519cd3bd0f not found: ID does not exist" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603526 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trz84\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-kube-api-access-trz84\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603674 4750 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603697 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6da4864c-6af8-4a50-b55f-c98d904808be-ca-trust-extracted\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603746 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-registry-certificates\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603830 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-registry-tls\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603850 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6da4864c-6af8-4a50-b55f-c98d904808be-installation-pull-secrets\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603884 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-bound-sa-token\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.603927 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-trusted-ca\") pod \"6da4864c-6af8-4a50-b55f-c98d904808be\" (UID: \"6da4864c-6af8-4a50-b55f-c98d904808be\") " Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.604590 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.605056 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.610623 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.611834 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.612426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da4864c-6af8-4a50-b55f-c98d904808be-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.612588 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-kube-api-access-trz84" (OuterVolumeSpecName: "kube-api-access-trz84") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "kube-api-access-trz84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.620234 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.626466 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da4864c-6af8-4a50-b55f-c98d904808be-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6da4864c-6af8-4a50-b55f-c98d904808be" (UID: "6da4864c-6af8-4a50-b55f-c98d904808be"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704565 4750 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704608 4750 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6da4864c-6af8-4a50-b55f-c98d904808be-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704620 4750 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704628 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704637 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trz84\" (UniqueName: \"kubernetes.io/projected/6da4864c-6af8-4a50-b55f-c98d904808be-kube-api-access-trz84\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704646 4750 reconciler_common.go:293] "Volume detached for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6da4864c-6af8-4a50-b55f-c98d904808be-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.704653 4750 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6da4864c-6af8-4a50-b55f-c98d904808be-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.885661 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2zdhw"] Feb 14 13:59:06 crc kubenswrapper[4750]: I0214 13:59:06.893672 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2zdhw"] Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 13:59:08.756436 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da4864c-6af8-4a50-b55f-c98d904808be" path="/var/lib/kubelet/pods/6da4864c-6af8-4a50-b55f-c98d904808be/volumes" Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 13:59:08.914858 4750 scope.go:117] "RemoveContainer" containerID="96104d37d0484b8afe982959520235bc2ccae596d7c5eefec63a4b405b4aea96" Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 13:59:08.936394 4750 scope.go:117] "RemoveContainer" containerID="7999878d5bd84bf1e8dbb3291be1793a2d30df52f5511130fbb88e14b1aa1e08" Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 13:59:08.951471 4750 scope.go:117] "RemoveContainer" containerID="5f59d30da001af9edba87cb46982b5a78fef0ce7633fe41e73137871d2592701" Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 13:59:08.966049 4750 scope.go:117] "RemoveContainer" containerID="b8f317d4931c1087d1ff906bee8d52d172f3940875c367dbe5d8a2b43f9cc80b" Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 13:59:08.979759 4750 scope.go:117] "RemoveContainer" containerID="aeb7a5015b602e7de590e95bc67e02acc2dfaa6c7222ea0e209f5ac4844afa4a" Feb 14 13:59:08 crc kubenswrapper[4750]: I0214 
13:59:08.991166 4750 scope.go:117] "RemoveContainer" containerID="dacb2407771d539bbd3bd7af36caec14d2303c6b78f5793b894c94faa8fd835e" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.770717 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g"] Feb 14 13:59:13 crc kubenswrapper[4750]: E0214 13:59:13.771784 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da4864c-6af8-4a50-b55f-c98d904808be" containerName="registry" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.771801 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da4864c-6af8-4a50-b55f-c98d904808be" containerName="registry" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.772248 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da4864c-6af8-4a50-b55f-c98d904808be" containerName="registry" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.773952 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.778169 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.778497 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.780004 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.780949 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.782517 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.792820 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g"] Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.800565 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvh4c\" (UniqueName: \"kubernetes.io/projected/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-kube-api-access-vvh4c\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.800658 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" 
(UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.800815 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.902172 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.902254 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvh4c\" (UniqueName: \"kubernetes.io/projected/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-kube-api-access-vvh4c\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.902273 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 
13:59:13.904781 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.911578 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:13 crc kubenswrapper[4750]: I0214 13:59:13.924513 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvh4c\" (UniqueName: \"kubernetes.io/projected/d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8-kube-api-access-vvh4c\") pod \"cluster-monitoring-operator-6d5b84845-rzh5g\" (UID: \"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:14 crc kubenswrapper[4750]: I0214 13:59:14.105891 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" Feb 14 13:59:14 crc kubenswrapper[4750]: I0214 13:59:14.409303 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g"] Feb 14 13:59:14 crc kubenswrapper[4750]: I0214 13:59:14.620151 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" event={"ID":"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8","Type":"ContainerStarted","Data":"3b29ddfeb1f92d6f8c7ac2e41007e40918dc734063c793ab45d2306786518b97"} Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.557380 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z"] Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.558496 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.559921 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.560047 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-nvk7l" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.568839 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z"] Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.632909 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" event={"ID":"d98cc4c8-e0bc-4455-b1ab-87ca2b6b01c8","Type":"ContainerStarted","Data":"11dfd18a272955e7cf750cc3458f68b022c5f3c42726e131ad831eeccb9cb5f5"} Feb 14 
13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.649215 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rzh5g" podStartSLOduration=2.123476184 podStartE2EDuration="3.649194748s" podCreationTimestamp="2026-02-14 13:59:13 +0000 UTC" firstStartedPulling="2026-02-14 13:59:14.420106801 +0000 UTC m=+426.446096292" lastFinishedPulling="2026-02-14 13:59:15.945825335 +0000 UTC m=+427.971814856" observedRunningTime="2026-02-14 13:59:16.646196436 +0000 UTC m=+428.672185927" watchObservedRunningTime="2026-02-14 13:59:16.649194748 +0000 UTC m=+428.675184239" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.738079 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62297907-4b6c-4638-bf2a-85c87d7f99ed-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-whb2z\" (UID: \"62297907-4b6c-4638-bf2a-85c87d7f99ed\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.839740 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62297907-4b6c-4638-bf2a-85c87d7f99ed-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-whb2z\" (UID: \"62297907-4b6c-4638-bf2a-85c87d7f99ed\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.845209 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62297907-4b6c-4638-bf2a-85c87d7f99ed-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-whb2z\" (UID: \"62297907-4b6c-4638-bf2a-85c87d7f99ed\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:16 crc kubenswrapper[4750]: I0214 13:59:16.871848 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:17 crc kubenswrapper[4750]: I0214 13:59:17.321692 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z"] Feb 14 13:59:17 crc kubenswrapper[4750]: I0214 13:59:17.644734 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" event={"ID":"62297907-4b6c-4638-bf2a-85c87d7f99ed","Type":"ContainerStarted","Data":"4e80228d704c35d22fe7081bea46223a9bc0f28289bf21032a92931cdb918fc7"} Feb 14 13:59:19 crc kubenswrapper[4750]: I0214 13:59:19.655375 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" event={"ID":"62297907-4b6c-4638-bf2a-85c87d7f99ed","Type":"ContainerStarted","Data":"5093c01bcc21537a3296ae7ef90fb72ff589e32599dc107d2f6c98a300806d70"} Feb 14 13:59:19 crc kubenswrapper[4750]: I0214 13:59:19.655645 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:19 crc kubenswrapper[4750]: I0214 13:59:19.664609 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" Feb 14 13:59:19 crc kubenswrapper[4750]: I0214 13:59:19.679869 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-whb2z" podStartSLOduration=1.950932224 podStartE2EDuration="3.679849952s" podCreationTimestamp="2026-02-14 13:59:16 +0000 UTC" 
firstStartedPulling="2026-02-14 13:59:17.335483669 +0000 UTC m=+429.361473150" lastFinishedPulling="2026-02-14 13:59:19.064401387 +0000 UTC m=+431.090390878" observedRunningTime="2026-02-14 13:59:19.670388237 +0000 UTC m=+431.696377738" watchObservedRunningTime="2026-02-14 13:59:19.679849952 +0000 UTC m=+431.705839443" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.649637 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-z75kt"] Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.650894 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.653193 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.653294 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.653357 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-q8x6j" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.653384 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.660470 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-z75kt"] Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.807208 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-kube-rbac-proxy-config\") pod 
\"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.807679 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.807705 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-metrics-client-ca\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.807753 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvlx\" (UniqueName: \"kubernetes.io/projected/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-kube-api-access-9wvlx\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.909100 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: 
I0214 13:59:20.909309 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.909346 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-metrics-client-ca\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.909377 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wvlx\" (UniqueName: \"kubernetes.io/projected/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-kube-api-access-9wvlx\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: E0214 13:59:20.909656 4750 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 14 13:59:20 crc kubenswrapper[4750]: E0214 13:59:20.909749 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-tls podName:63f577a3-bc9d-4e68-983a-ad6fa82ddb78 nodeName:}" failed. No retries permitted until 2026-02-14 13:59:21.409726036 +0000 UTC m=+433.435715517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-tls") pod "prometheus-operator-db54df47d-z75kt" (UID: "63f577a3-bc9d-4e68-983a-ad6fa82ddb78") : secret "prometheus-operator-tls" not found Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.911254 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-metrics-client-ca\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.915358 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:20 crc kubenswrapper[4750]: I0214 13:59:20.951923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wvlx\" (UniqueName: \"kubernetes.io/projected/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-kube-api-access-9wvlx\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:21 crc kubenswrapper[4750]: I0214 13:59:21.417225 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:21 crc kubenswrapper[4750]: I0214 13:59:21.422923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/63f577a3-bc9d-4e68-983a-ad6fa82ddb78-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z75kt\" (UID: \"63f577a3-bc9d-4e68-983a-ad6fa82ddb78\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:21 crc kubenswrapper[4750]: I0214 13:59:21.568500 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" Feb 14 13:59:22 crc kubenswrapper[4750]: I0214 13:59:22.040671 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-z75kt"] Feb 14 13:59:22 crc kubenswrapper[4750]: W0214 13:59:22.052740 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63f577a3_bc9d_4e68_983a_ad6fa82ddb78.slice/crio-721bfd699cb51a4d140184c3209ffb7391b10f9d752b787393ca78e34788490c WatchSource:0}: Error finding container 721bfd699cb51a4d140184c3209ffb7391b10f9d752b787393ca78e34788490c: Status 404 returned error can't find the container with id 721bfd699cb51a4d140184c3209ffb7391b10f9d752b787393ca78e34788490c Feb 14 13:59:22 crc kubenswrapper[4750]: I0214 13:59:22.677091 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" event={"ID":"63f577a3-bc9d-4e68-983a-ad6fa82ddb78","Type":"ContainerStarted","Data":"721bfd699cb51a4d140184c3209ffb7391b10f9d752b787393ca78e34788490c"} Feb 14 13:59:24 crc kubenswrapper[4750]: I0214 13:59:24.704207 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" 
event={"ID":"63f577a3-bc9d-4e68-983a-ad6fa82ddb78","Type":"ContainerStarted","Data":"4cb180729c9153253881c967515aeebfe3ea5cb7f733453d1c0cc3fa98524576"} Feb 14 13:59:24 crc kubenswrapper[4750]: I0214 13:59:24.704536 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" event={"ID":"63f577a3-bc9d-4e68-983a-ad6fa82ddb78","Type":"ContainerStarted","Data":"3ce09937b5a70aa3a3f0c4f1f4755b7ee72f419b13dccccf21359c0b306a82af"} Feb 14 13:59:24 crc kubenswrapper[4750]: I0214 13:59:24.733999 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-z75kt" podStartSLOduration=3.066539584 podStartE2EDuration="4.733875049s" podCreationTimestamp="2026-02-14 13:59:20 +0000 UTC" firstStartedPulling="2026-02-14 13:59:22.056466178 +0000 UTC m=+434.082455659" lastFinishedPulling="2026-02-14 13:59:23.723801603 +0000 UTC m=+435.749791124" observedRunningTime="2026-02-14 13:59:24.725065091 +0000 UTC m=+436.751054572" watchObservedRunningTime="2026-02-14 13:59:24.733875049 +0000 UTC m=+436.759864530" Feb 14 13:59:26 crc kubenswrapper[4750]: I0214 13:59:26.999572 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-486j8"] Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.000835 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.005315 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.007242 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.015739 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-486j8"] Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.016779 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-bx4ld" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.052979 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m"] Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.054340 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.056815 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-92q5f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.057027 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.057066 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.057992 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-fjd7f"] Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.058364 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.059550 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.061681 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-pj2lx" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.061879 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.062141 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.063239 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m"] Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.099003 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6bc62182-fc55-491e-b9ec-110ae12aa8d4-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.099257 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6bc62182-fc55-491e-b9ec-110ae12aa8d4-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.099381 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/6bc62182-fc55-491e-b9ec-110ae12aa8d4-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.099481 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dql\" (UniqueName: \"kubernetes.io/projected/6bc62182-fc55-491e-b9ec-110ae12aa8d4-kube-api-access-f9dql\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200605 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1df19e59-7b02-48ec-a908-b41003c19c8e-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200655 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200682 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-textfile\") pod \"node-exporter-fjd7f\" (UID: 
\"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200705 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-sys\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200724 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcpl\" (UniqueName: \"kubernetes.io/projected/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-api-access-xlcpl\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200748 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6bc62182-fc55-491e-b9ec-110ae12aa8d4-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200768 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6bc62182-fc55-491e-b9ec-110ae12aa8d4-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200784 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/6bc62182-fc55-491e-b9ec-110ae12aa8d4-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200808 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200825 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1df19e59-7b02-48ec-a908-b41003c19c8e-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200841 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9dql\" (UniqueName: \"kubernetes.io/projected/6bc62182-fc55-491e-b9ec-110ae12aa8d4-kube-api-access-f9dql\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200860 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-wtmp\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " 
pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200883 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-root\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200898 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7knhq\" (UniqueName: \"kubernetes.io/projected/2a2f7475-f649-4e61-8151-32c0ff6ba07a-kube-api-access-7knhq\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200921 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200938 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-tls\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200956 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2a2f7475-f649-4e61-8151-32c0ff6ba07a-metrics-client-ca\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.200980 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.201930 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6bc62182-fc55-491e-b9ec-110ae12aa8d4-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.208044 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6bc62182-fc55-491e-b9ec-110ae12aa8d4-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.218705 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9dql\" (UniqueName: \"kubernetes.io/projected/6bc62182-fc55-491e-b9ec-110ae12aa8d4-kube-api-access-f9dql\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 
crc kubenswrapper[4750]: I0214 13:59:27.222484 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6bc62182-fc55-491e-b9ec-110ae12aa8d4-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-486j8\" (UID: \"6bc62182-fc55-491e-b9ec-110ae12aa8d4\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301742 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2a2f7475-f649-4e61-8151-32c0ff6ba07a-metrics-client-ca\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301787 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301827 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1df19e59-7b02-48ec-a908-b41003c19c8e-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301848 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301865 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-textfile\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301886 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-sys\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301904 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlcpl\" (UniqueName: \"kubernetes.io/projected/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-api-access-xlcpl\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301935 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301953 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/1df19e59-7b02-48ec-a908-b41003c19c8e-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301972 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-wtmp\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.301988 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-root\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.302003 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7knhq\" (UniqueName: \"kubernetes.io/projected/2a2f7475-f649-4e61-8151-32c0ff6ba07a-kube-api-access-7knhq\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.302026 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.302042 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-tls\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.302696 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-root\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.302808 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2a2f7475-f649-4e61-8151-32c0ff6ba07a-metrics-client-ca\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.302897 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-textfile\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.303000 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-wtmp\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.303054 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/2a2f7475-f649-4e61-8151-32c0ff6ba07a-sys\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.303155 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1df19e59-7b02-48ec-a908-b41003c19c8e-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.303713 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.303952 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1df19e59-7b02-48ec-a908-b41003c19c8e-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.305585 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.305704 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.305858 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-tls\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.306306 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2a2f7475-f649-4e61-8151-32c0ff6ba07a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.317474 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.326663 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcpl\" (UniqueName: \"kubernetes.io/projected/1df19e59-7b02-48ec-a908-b41003c19c8e-kube-api-access-xlcpl\") pod \"kube-state-metrics-777cb5bd5d-frf6m\" (UID: \"1df19e59-7b02-48ec-a908-b41003c19c8e\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.330693 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7knhq\" (UniqueName: \"kubernetes.io/projected/2a2f7475-f649-4e61-8151-32c0ff6ba07a-kube-api-access-7knhq\") pod \"node-exporter-fjd7f\" (UID: \"2a2f7475-f649-4e61-8151-32c0ff6ba07a\") " pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.368523 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.374651 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-fjd7f" Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.728808 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fjd7f" event={"ID":"2a2f7475-f649-4e61-8151-32c0ff6ba07a","Type":"ContainerStarted","Data":"60e58ac360a27e29e18c4169fe50f378c606593fc8e9e626c385c339e32ea589"} Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.756951 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-486j8"] Feb 14 13:59:27 crc kubenswrapper[4750]: I0214 13:59:27.818340 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m"] Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.094593 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.096556 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.098946 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-f6rrj" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.099159 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.099260 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.099705 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.099804 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.100596 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.100712 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.100838 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.115221 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.116773 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216500 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216717 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216738 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/0b16c32a-6002-4d87-8813-af7f0319a759-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216789 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b16c32a-6002-4d87-8813-af7f0319a759-config-out\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216809 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: 
\"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216831 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-config-volume\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216844 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b16c32a-6002-4d87-8813-af7f0319a759-tls-assets\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216861 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-web-config\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216880 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b16c32a-6002-4d87-8813-af7f0319a759-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.216948 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.217130 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b16c32a-6002-4d87-8813-af7f0319a759-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.217151 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm4gn\" (UniqueName: \"kubernetes.io/projected/0b16c32a-6002-4d87-8813-af7f0319a759-kube-api-access-xm4gn\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318282 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-config-volume\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318325 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b16c32a-6002-4d87-8813-af7f0319a759-tls-assets\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318345 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-web-config\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318377 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b16c32a-6002-4d87-8813-af7f0319a759-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318400 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318477 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b16c32a-6002-4d87-8813-af7f0319a759-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318503 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm4gn\" (UniqueName: \"kubernetes.io/projected/0b16c32a-6002-4d87-8813-af7f0319a759-kube-api-access-xm4gn\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318544 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318573 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318596 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/0b16c32a-6002-4d87-8813-af7f0319a759-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318653 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b16c32a-6002-4d87-8813-af7f0319a759-config-out\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.318681 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: E0214 13:59:28.319347 4750 secret.go:188] Couldn't get secret 
openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 14 13:59:28 crc kubenswrapper[4750]: E0214 13:59:28.319406 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-main-tls podName:0b16c32a-6002-4d87-8813-af7f0319a759 nodeName:}" failed. No retries permitted until 2026-02-14 13:59:28.81938569 +0000 UTC m=+440.845375171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "0b16c32a-6002-4d87-8813-af7f0319a759") : secret "alertmanager-main-tls" not found Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.320050 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/0b16c32a-6002-4d87-8813-af7f0319a759-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.320995 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b16c32a-6002-4d87-8813-af7f0319a759-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.321954 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b16c32a-6002-4d87-8813-af7f0319a759-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 
13:59:28.325682 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.325928 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.325956 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b16c32a-6002-4d87-8813-af7f0319a759-config-out\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.325987 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-config-volume\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.326335 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b16c32a-6002-4d87-8813-af7f0319a759-tls-assets\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.325950 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-web-config\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.339123 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.341979 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm4gn\" (UniqueName: \"kubernetes.io/projected/0b16c32a-6002-4d87-8813-af7f0319a759-kube-api-access-xm4gn\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.738642 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" event={"ID":"1df19e59-7b02-48ec-a908-b41003c19c8e","Type":"ContainerStarted","Data":"0288661908cb45237da349772642117cf2ebe08c30e71785e87c9178d42983c5"} Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.740526 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fjd7f" event={"ID":"2a2f7475-f649-4e61-8151-32c0ff6ba07a","Type":"ContainerStarted","Data":"6c2134d62156edeba91ed5e29bf00d21ad9830d913aafa6af3f31cb53518e287"} Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.752880 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" 
event={"ID":"6bc62182-fc55-491e-b9ec-110ae12aa8d4","Type":"ContainerStarted","Data":"1465c3fb5d061123a31b2e10a82dd96c20d6554c210fe7c1bede268f2ff02ffc"} Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.753263 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" event={"ID":"6bc62182-fc55-491e-b9ec-110ae12aa8d4","Type":"ContainerStarted","Data":"29b00b65e62c3961f3d9f87bad688e8a8aad4a1546f8e3b1d9057575b9d53ccb"} Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.753282 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" event={"ID":"6bc62182-fc55-491e-b9ec-110ae12aa8d4","Type":"ContainerStarted","Data":"d2e6c34e243cbb34e52ae7c084010d5b380dbfe1a07cd07a2f914854b2eeeffd"} Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.827510 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:28 crc kubenswrapper[4750]: I0214 13:59:28.836582 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b16c32a-6002-4d87-8813-af7f0319a759-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b16c32a-6002-4d87-8813-af7f0319a759\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.011609 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.101649 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-579d545c98-2ddxx"] Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.109928 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.116062 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-579d545c98-2ddxx"] Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.124235 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4pm6c" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.129814 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.130138 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.130400 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.130470 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8okq00pupljje" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.130234 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.131693 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 
13:59:29.235123 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235410 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-grpc-tls\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235438 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235515 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-tls\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235662 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235726 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235808 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzm2\" (UniqueName: \"kubernetes.io/projected/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-kube-api-access-jwzm2\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.235850 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-metrics-client-ca\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.336856 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: 
\"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.336924 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzm2\" (UniqueName: \"kubernetes.io/projected/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-kube-api-access-jwzm2\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.336959 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-metrics-client-ca\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.337004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.337022 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-grpc-tls\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.337039 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" 
(UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.337087 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-tls\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.337144 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.338645 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-metrics-client-ca\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.345504 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " 
pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.345989 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-tls\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.346696 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.348837 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.349437 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.356736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-secret-grpc-tls\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.356979 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzm2\" (UniqueName: \"kubernetes.io/projected/d0bbbe40-b0b1-4ee8-a52c-ff51431988f5-kube-api-access-jwzm2\") pod \"thanos-querier-579d545c98-2ddxx\" (UID: \"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5\") " pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.440699 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.467250 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.750512 4750 generic.go:334] "Generic (PLEG): container finished" podID="2a2f7475-f649-4e61-8151-32c0ff6ba07a" containerID="6c2134d62156edeba91ed5e29bf00d21ad9830d913aafa6af3f31cb53518e287" exitCode=0 Feb 14 13:59:29 crc kubenswrapper[4750]: I0214 13:59:29.750550 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fjd7f" event={"ID":"2a2f7475-f649-4e61-8151-32c0ff6ba07a","Type":"ContainerDied","Data":"6c2134d62156edeba91ed5e29bf00d21ad9830d913aafa6af3f31cb53518e287"} Feb 14 13:59:29 crc kubenswrapper[4750]: W0214 13:59:29.826364 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b16c32a_6002_4d87_8813_af7f0319a759.slice/crio-13016f9a18fdadd8a32a3f0bcfd2121b9519f6f249226785ef5127287d053a2e WatchSource:0}: Error finding container 13016f9a18fdadd8a32a3f0bcfd2121b9519f6f249226785ef5127287d053a2e: Status 404 
returned error can't find the container with id 13016f9a18fdadd8a32a3f0bcfd2121b9519f6f249226785ef5127287d053a2e Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.128764 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.129083 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.129310 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.130013 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed0706ee349da09831b26e5d759efb9be0265de4607faf38c0a7fea0110aee8e"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.130068 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://ed0706ee349da09831b26e5d759efb9be0265de4607faf38c0a7fea0110aee8e" gracePeriod=600 Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.307832 4750 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-monitoring/thanos-querier-579d545c98-2ddxx"] Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.761233 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" event={"ID":"6bc62182-fc55-491e-b9ec-110ae12aa8d4","Type":"ContainerStarted","Data":"38f6ed3aeb83d9daf7b60c82f75b3623771a88904a52933d210ae0b8240ec170"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.775252 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" event={"ID":"1df19e59-7b02-48ec-a908-b41003c19c8e","Type":"ContainerStarted","Data":"8eeea10a12b140e6bc68b69c54e434a67441ebbd3de3fa45b9969722a7669e01"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.775308 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" event={"ID":"1df19e59-7b02-48ec-a908-b41003c19c8e","Type":"ContainerStarted","Data":"bbd9d5b383d978fb4be50622f02a542deb67106ec75fd80c322bff411a17e950"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.775324 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" event={"ID":"1df19e59-7b02-48ec-a908-b41003c19c8e","Type":"ContainerStarted","Data":"52cc6b98dc60cf84388b001ff7bebf62bec704eb86a1c0b6242ecdb855483d63"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.777387 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"eb20a70816ecd8035a589fe32a8a04686d4aea20c978791dee3924c3bb302ce3"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.780071 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="ed0706ee349da09831b26e5d759efb9be0265de4607faf38c0a7fea0110aee8e" exitCode=0 
Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.780133 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"ed0706ee349da09831b26e5d759efb9be0265de4607faf38c0a7fea0110aee8e"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.780165 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"16d8f67a0e99f472e2aad782f2e41524ac5fa5eb8cfb90a1e3ce5626b6571b16"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.780187 4750 scope.go:117] "RemoveContainer" containerID="00b0b46fc640d21d4991d4ffa39e23f3396b9ce4c5d1d0427a6aa625cbdcb53b" Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.781279 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"13016f9a18fdadd8a32a3f0bcfd2121b9519f6f249226785ef5127287d053a2e"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.786617 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-486j8" podStartSLOduration=2.952002162 podStartE2EDuration="4.786600978s" podCreationTimestamp="2026-02-14 13:59:26 +0000 UTC" firstStartedPulling="2026-02-14 13:59:28.02907125 +0000 UTC m=+440.055060731" lastFinishedPulling="2026-02-14 13:59:29.863670036 +0000 UTC m=+441.889659547" observedRunningTime="2026-02-14 13:59:30.784039528 +0000 UTC m=+442.810029039" watchObservedRunningTime="2026-02-14 13:59:30.786600978 +0000 UTC m=+442.812590469" Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.794894 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fjd7f" 
event={"ID":"2a2f7475-f649-4e61-8151-32c0ff6ba07a","Type":"ContainerStarted","Data":"323019bdcf9b8aae27f96b7bc0b80b27a41424dd17b663710510e44a1538591b"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.794933 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fjd7f" event={"ID":"2a2f7475-f649-4e61-8151-32c0ff6ba07a","Type":"ContainerStarted","Data":"d4c2f4fa7de4245f0f0393cc7aaa0116c5c03218549cddfb76a6591f2075157a"} Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.823158 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-frf6m" podStartSLOduration=1.787352458 podStartE2EDuration="3.823137686s" podCreationTimestamp="2026-02-14 13:59:27 +0000 UTC" firstStartedPulling="2026-02-14 13:59:27.827915149 +0000 UTC m=+439.853904620" lastFinishedPulling="2026-02-14 13:59:29.863700367 +0000 UTC m=+441.889689848" observedRunningTime="2026-02-14 13:59:30.821536633 +0000 UTC m=+442.847526114" watchObservedRunningTime="2026-02-14 13:59:30.823137686 +0000 UTC m=+442.849127167" Feb 14 13:59:30 crc kubenswrapper[4750]: I0214 13:59:30.842543 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-fjd7f" podStartSLOduration=2.745175592 podStartE2EDuration="3.84252078s" podCreationTimestamp="2026-02-14 13:59:27 +0000 UTC" firstStartedPulling="2026-02-14 13:59:27.445198248 +0000 UTC m=+439.471187729" lastFinishedPulling="2026-02-14 13:59:28.542543426 +0000 UTC m=+440.568532917" observedRunningTime="2026-02-14 13:59:30.836176288 +0000 UTC m=+442.862165789" watchObservedRunningTime="2026-02-14 13:59:30.84252078 +0000 UTC m=+442.868510261" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.811816 4750 generic.go:334] "Generic (PLEG): container finished" podID="0b16c32a-6002-4d87-8813-af7f0319a759" containerID="3790d1e3462670a3409b16a83f934757e400e8f691b2a1c64ea508b9143eb6db" exitCode=0 Feb 14 
13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.813221 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerDied","Data":"3790d1e3462670a3409b16a83f934757e400e8f691b2a1c64ea508b9143eb6db"} Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.840064 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c8cfb5cfd-g8zdh"] Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.851906 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.858664 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c8cfb5cfd-g8zdh"] Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.973974 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-service-ca\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.974396 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjs4x\" (UniqueName: \"kubernetes.io/projected/7876ddbf-401e-42e3-887a-daf2787a7dd1-kube-api-access-mjs4x\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.974463 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-oauth-serving-cert\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: 
\"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.974489 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-oauth-config\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.974620 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-trusted-ca-bundle\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.974666 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-config\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:31 crc kubenswrapper[4750]: I0214 13:59:31.974735 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-serving-cert\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076571 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-service-ca\") 
pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076629 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjs4x\" (UniqueName: \"kubernetes.io/projected/7876ddbf-401e-42e3-887a-daf2787a7dd1-kube-api-access-mjs4x\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076659 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-oauth-serving-cert\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076681 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-oauth-config\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076708 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-trusted-ca-bundle\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076726 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-config\") pod 
\"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.076749 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-serving-cert\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.077787 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-service-ca\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.077987 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-trusted-ca-bundle\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.078237 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-config\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.078856 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-oauth-serving-cert\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: 
\"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.084842 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-oauth-config\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.096709 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-serving-cert\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.098490 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjs4x\" (UniqueName: \"kubernetes.io/projected/7876ddbf-401e-42e3-887a-daf2787a7dd1-kube-api-access-mjs4x\") pod \"console-7c8cfb5cfd-g8zdh\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.184626 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.407730 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-55978c67c-dxgvg"] Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.408661 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.412343 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-55978c67c-dxgvg"] Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.415418 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.416292 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.416651 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.416987 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-evq0aufbhmsee" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.417262 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-z6h88" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.417553 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583130 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-secret-metrics-client-certs\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583540 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/c8008348-06f3-48da-83ea-b5e38e8a165c-audit-log\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583650 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8008348-06f3-48da-83ea-b5e38e8a165c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583746 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwx8\" (UniqueName: \"kubernetes.io/projected/c8008348-06f3-48da-83ea-b5e38e8a165c-kube-api-access-wbwx8\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583824 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-client-ca-bundle\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583909 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c8008348-06f3-48da-83ea-b5e38e8a165c-metrics-server-audit-profiles\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " 
pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.583936 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-secret-metrics-server-tls\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-client-ca-bundle\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689087 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c8008348-06f3-48da-83ea-b5e38e8a165c-metrics-server-audit-profiles\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689131 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-secret-metrics-server-tls\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689164 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-secret-metrics-client-certs\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689210 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c8008348-06f3-48da-83ea-b5e38e8a165c-audit-log\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689245 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8008348-06f3-48da-83ea-b5e38e8a165c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.689290 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbwx8\" (UniqueName: \"kubernetes.io/projected/c8008348-06f3-48da-83ea-b5e38e8a165c-kube-api-access-wbwx8\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.690388 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c8008348-06f3-48da-83ea-b5e38e8a165c-metrics-server-audit-profiles\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.690890 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c8008348-06f3-48da-83ea-b5e38e8a165c-audit-log\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.690925 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8008348-06f3-48da-83ea-b5e38e8a165c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.694936 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-client-ca-bundle\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.697923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-secret-metrics-client-certs\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.699456 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c8008348-06f3-48da-83ea-b5e38e8a165c-secret-metrics-server-tls\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 
13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.705225 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbwx8\" (UniqueName: \"kubernetes.io/projected/c8008348-06f3-48da-83ea-b5e38e8a165c-kube-api-access-wbwx8\") pod \"metrics-server-55978c67c-dxgvg\" (UID: \"c8008348-06f3-48da-83ea-b5e38e8a165c\") " pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.766540 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.809942 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-756445ddc-k9jkf"] Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.811078 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.818396 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.818436 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.823829 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-756445ddc-k9jkf"] Feb 14 13:59:32 crc kubenswrapper[4750]: I0214 13:59:32.998239 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/35573e5e-9fcc-4452-ad54-2b2260250a6e-monitoring-plugin-cert\") pod \"monitoring-plugin-756445ddc-k9jkf\" (UID: \"35573e5e-9fcc-4452-ad54-2b2260250a6e\") " pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:33 crc kubenswrapper[4750]: 
I0214 13:59:33.099927 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/35573e5e-9fcc-4452-ad54-2b2260250a6e-monitoring-plugin-cert\") pod \"monitoring-plugin-756445ddc-k9jkf\" (UID: \"35573e5e-9fcc-4452-ad54-2b2260250a6e\") " pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.111869 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c8cfb5cfd-g8zdh"] Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.113494 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/35573e5e-9fcc-4452-ad54-2b2260250a6e-monitoring-plugin-cert\") pod \"monitoring-plugin-756445ddc-k9jkf\" (UID: \"35573e5e-9fcc-4452-ad54-2b2260250a6e\") " pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:33 crc kubenswrapper[4750]: W0214 13:59:33.119963 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7876ddbf_401e_42e3_887a_daf2787a7dd1.slice/crio-815627b2e2ef5894f6a5ade687478de466858e51440f664263444fa64da6c120 WatchSource:0}: Error finding container 815627b2e2ef5894f6a5ade687478de466858e51440f664263444fa64da6c120: Status 404 returned error can't find the container with id 815627b2e2ef5894f6a5ade687478de466858e51440f664263444fa64da6c120 Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.169096 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.297928 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-55978c67c-dxgvg"] Feb 14 13:59:33 crc kubenswrapper[4750]: W0214 13:59:33.313211 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8008348_06f3_48da_83ea_b5e38e8a165c.slice/crio-2643a86b109ec4444c45fb9d363e6e37b676d5ab2b45087d70928bb3d21f7059 WatchSource:0}: Error finding container 2643a86b109ec4444c45fb9d363e6e37b676d5ab2b45087d70928bb3d21f7059: Status 404 returned error can't find the container with id 2643a86b109ec4444c45fb9d363e6e37b676d5ab2b45087d70928bb3d21f7059 Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.364371 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.366332 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372480 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372496 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372498 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372739 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cl0fq9jn03hvp" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372782 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372789 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372907 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372879 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.372974 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zk2lw" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.373004 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.373247 4750 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.375187 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.387490 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.391463 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.510770 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.510814 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.510839 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511062 
4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511081 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511167 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511189 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-web-config\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511205 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 
13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511231 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511278 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511297 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2608f3c-7a28-4f6d-ac60-793dab30ff66-config-out\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511314 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2608f3c-7a28-4f6d-ac60-793dab30ff66-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511333 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511379 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-config\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511409 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511431 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511450 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qst27\" (UniqueName: \"kubernetes.io/projected/e2608f3c-7a28-4f6d-ac60-793dab30ff66-kube-api-access-qst27\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.511470 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612482 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612527 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612550 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612574 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612594 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-k8s-rulefiles-0\") pod 
\"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612615 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612634 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-web-config\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612649 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612678 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612706 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-thanos-prometheus-http-client-file\") pod 
\"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612723 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2608f3c-7a28-4f6d-ac60-793dab30ff66-config-out\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612739 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2608f3c-7a28-4f6d-ac60-793dab30ff66-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612753 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612786 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-config\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612803 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 
13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612818 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612840 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qst27\" (UniqueName: \"kubernetes.io/projected/e2608f3c-7a28-4f6d-ac60-793dab30ff66-kube-api-access-qst27\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.612857 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.613886 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.615824 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.617264 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.618925 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-web-config\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.620130 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.620415 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.620527 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.621915 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.622977 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.623228 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2608f3c-7a28-4f6d-ac60-793dab30ff66-config-out\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.623314 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.624018 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-config\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.624487 
4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-756445ddc-k9jkf"] Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.625921 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.627286 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2608f3c-7a28-4f6d-ac60-793dab30ff66-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.629272 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.630656 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e2608f3c-7a28-4f6d-ac60-793dab30ff66-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.630689 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2608f3c-7a28-4f6d-ac60-793dab30ff66-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: W0214 13:59:33.640490 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35573e5e_9fcc_4452_ad54_2b2260250a6e.slice/crio-06e2a33defba198bd28f9194c3e4afecdbbdff328a392a0434d375c6b44695a1 WatchSource:0}: Error finding container 06e2a33defba198bd28f9194c3e4afecdbbdff328a392a0434d375c6b44695a1: Status 404 returned error can't find the container with id 06e2a33defba198bd28f9194c3e4afecdbbdff328a392a0434d375c6b44695a1 Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.642512 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qst27\" (UniqueName: \"kubernetes.io/projected/e2608f3c-7a28-4f6d-ac60-793dab30ff66-kube-api-access-qst27\") pod \"prometheus-k8s-0\" (UID: \"e2608f3c-7a28-4f6d-ac60-793dab30ff66\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.691459 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.837467 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" event={"ID":"c8008348-06f3-48da-83ea-b5e38e8a165c","Type":"ContainerStarted","Data":"2643a86b109ec4444c45fb9d363e6e37b676d5ab2b45087d70928bb3d21f7059"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.840471 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c8cfb5cfd-g8zdh" event={"ID":"7876ddbf-401e-42e3-887a-daf2787a7dd1","Type":"ContainerStarted","Data":"3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.840491 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c8cfb5cfd-g8zdh" event={"ID":"7876ddbf-401e-42e3-887a-daf2787a7dd1","Type":"ContainerStarted","Data":"815627b2e2ef5894f6a5ade687478de466858e51440f664263444fa64da6c120"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.846147 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"2f66692bb6a467691c17a048c94065e26bb94189aa7c70ed5906316a88830be8"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.846171 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"cd81e233b357efc69b1afb7065c97ad42b5308dbb3b42e1652ffb67d09749131"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.846181 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" 
event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"b453f2fc4fd09a9243a215615e6593f17850bc48190fa893e91791603c216ae1"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.847427 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" event={"ID":"35573e5e-9fcc-4452-ad54-2b2260250a6e","Type":"ContainerStarted","Data":"06e2a33defba198bd28f9194c3e4afecdbbdff328a392a0434d375c6b44695a1"} Feb 14 13:59:33 crc kubenswrapper[4750]: I0214 13:59:33.864941 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c8cfb5cfd-g8zdh" podStartSLOduration=2.864924372 podStartE2EDuration="2.864924372s" podCreationTimestamp="2026-02-14 13:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 13:59:33.860745429 +0000 UTC m=+445.886734910" watchObservedRunningTime="2026-02-14 13:59:33.864924372 +0000 UTC m=+445.890913853" Feb 14 13:59:34 crc kubenswrapper[4750]: I0214 13:59:34.137835 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 13:59:34 crc kubenswrapper[4750]: I0214 13:59:34.853916 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"e3aa9b1c0a1919684a2dc81a468650945dea96620acffce4320421c270018ee4"} Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.895038 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"6b40f9be9d9d8fec8064e1a92407a49f57ccb8344cdd2a4c24dfeadfeab399b3"} Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.897603 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"7f455d1119456b9aad0ed8a610e4625f1990b615ae16f991c74f2d5f05ddf1dc"} Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.901034 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" event={"ID":"c8008348-06f3-48da-83ea-b5e38e8a165c","Type":"ContainerStarted","Data":"6cb853418b91af6c54dbc852d02eec397a525a8dc9ca7093b14f9ae770ecf3e8"} Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.905309 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"b875d24f6de27feb953f799856004466bf213ff0b53d7d9044ac2827ad6ff14b"} Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.912200 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" event={"ID":"35573e5e-9fcc-4452-ad54-2b2260250a6e","Type":"ContainerStarted","Data":"e81ff0c6d73c6eef4530c1685bbb90ea92d167130279e6444e9f0619e3810a76"} Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.912988 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.921400 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.952753 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-756445ddc-k9jkf" podStartSLOduration=1.901574223 podStartE2EDuration="3.952735677s" podCreationTimestamp="2026-02-14 13:59:32 +0000 UTC" firstStartedPulling="2026-02-14 13:59:33.643415841 +0000 UTC m=+445.669405332" 
lastFinishedPulling="2026-02-14 13:59:35.694577305 +0000 UTC m=+447.720566786" observedRunningTime="2026-02-14 13:59:35.944923565 +0000 UTC m=+447.970913056" watchObservedRunningTime="2026-02-14 13:59:35.952735677 +0000 UTC m=+447.978725158" Feb 14 13:59:35 crc kubenswrapper[4750]: I0214 13:59:35.965580 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" podStartSLOduration=1.630632136 podStartE2EDuration="3.965557214s" podCreationTimestamp="2026-02-14 13:59:32 +0000 UTC" firstStartedPulling="2026-02-14 13:59:33.320001924 +0000 UTC m=+445.345991405" lastFinishedPulling="2026-02-14 13:59:35.654927002 +0000 UTC m=+447.680916483" observedRunningTime="2026-02-14 13:59:35.962420029 +0000 UTC m=+447.988409510" watchObservedRunningTime="2026-02-14 13:59:35.965557214 +0000 UTC m=+447.991546705" Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.926574 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"1540b87e62ea7c23d2adfd2c93a9d8c9e1e154d1f8d0482d25330859aef94ff1"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.927068 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" event={"ID":"d0bbbe40-b0b1-4ee8-a52c-ff51431988f5","Type":"ContainerStarted","Data":"a5d42cd98c9b48bda7e5725c05852812c615b8b0a90b8fe0798ebde06c4d9501"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.927142 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.929751 4750 generic.go:334] "Generic (PLEG): container finished" podID="e2608f3c-7a28-4f6d-ac60-793dab30ff66" containerID="6b40f9be9d9d8fec8064e1a92407a49f57ccb8344cdd2a4c24dfeadfeab399b3" exitCode=0 Feb 14 13:59:36 
crc kubenswrapper[4750]: I0214 13:59:36.929810 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerDied","Data":"6b40f9be9d9d8fec8064e1a92407a49f57ccb8344cdd2a4c24dfeadfeab399b3"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.939514 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"04cf37b69682ac6ca46886410a23234a3cd4d1ef4ebee112a4b4a201a6019a09"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.939588 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"2efb7eab2864ce484c20dcefa0ccf6b6ffccf917ee326ec4d2f7be93ff3b5e05"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.939618 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"a8b242c96f566820519f3b3d38727520d674605af52c8fb310470ba5d01b60e2"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.939642 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"de50bef0df35bd3f70c56392776351dce983c50c50950ec359ee98b4a9c34a7f"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.939664 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b16c32a-6002-4d87-8813-af7f0319a759","Type":"ContainerStarted","Data":"33b80c9798ebf0833834b3011149877258549ecf8a42debc76a110066509cfed"} Feb 14 13:59:36 crc kubenswrapper[4750]: I0214 13:59:36.993514 4750 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" podStartSLOduration=2.651706845 podStartE2EDuration="7.993493325s" podCreationTimestamp="2026-02-14 13:59:29 +0000 UTC" firstStartedPulling="2026-02-14 13:59:30.320988955 +0000 UTC m=+442.346978436" lastFinishedPulling="2026-02-14 13:59:35.662775425 +0000 UTC m=+447.688764916" observedRunningTime="2026-02-14 13:59:36.966174696 +0000 UTC m=+448.992164227" watchObservedRunningTime="2026-02-14 13:59:36.993493325 +0000 UTC m=+449.019482816" Feb 14 13:59:37 crc kubenswrapper[4750]: I0214 13:59:37.016317 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.251386717 podStartE2EDuration="9.016300491s" podCreationTimestamp="2026-02-14 13:59:28 +0000 UTC" firstStartedPulling="2026-02-14 13:59:29.851469596 +0000 UTC m=+441.877459077" lastFinishedPulling="2026-02-14 13:59:35.61638337 +0000 UTC m=+447.642372851" observedRunningTime="2026-02-14 13:59:37.010035752 +0000 UTC m=+449.036025243" watchObservedRunningTime="2026-02-14 13:59:37.016300491 +0000 UTC m=+449.042289982" Feb 14 13:59:39 crc kubenswrapper[4750]: I0214 13:59:39.451971 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-579d545c98-2ddxx" Feb 14 13:59:40 crc kubenswrapper[4750]: I0214 13:59:40.968399 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"1ba817b5c74408b0501037acdd8e4464387f3102f43df4f9a51151b84a2f3053"} Feb 14 13:59:40 crc kubenswrapper[4750]: I0214 13:59:40.968721 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"69cf77697c99c584d9912c617bfab2e138a0c300cbc36dc9e102cfe31e0f38c8"} Feb 14 13:59:40 crc 
kubenswrapper[4750]: I0214 13:59:40.968735 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"09bab9c15ba0b67a94ebb603d04dcb5bb1a6651e4ea840eaac142e2e0734d61c"} Feb 14 13:59:40 crc kubenswrapper[4750]: I0214 13:59:40.968747 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"10e8954a23741ad4744dc1dd9d079167b403afd861ef6b36814d02abf33b3b07"} Feb 14 13:59:40 crc kubenswrapper[4750]: I0214 13:59:40.968757 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"0ada8b4493e591d93ff5b301a3830c97d10a746670abd08b101bc9fc0224d616"} Feb 14 13:59:40 crc kubenswrapper[4750]: I0214 13:59:40.968766 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2608f3c-7a28-4f6d-ac60-793dab30ff66","Type":"ContainerStarted","Data":"f82ad48054122ad3c5c255f0770d9a043e5a40704ebfd1d288fb686a2062d916"} Feb 14 13:59:42 crc kubenswrapper[4750]: I0214 13:59:42.185523 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:42 crc kubenswrapper[4750]: I0214 13:59:42.186032 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:42 crc kubenswrapper[4750]: I0214 13:59:42.193376 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:42 crc kubenswrapper[4750]: I0214 13:59:42.222692 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" 
podStartSLOduration=6.094505917 podStartE2EDuration="9.222663929s" podCreationTimestamp="2026-02-14 13:59:33 +0000 UTC" firstStartedPulling="2026-02-14 13:59:36.932650829 +0000 UTC m=+448.958640350" lastFinishedPulling="2026-02-14 13:59:40.060808881 +0000 UTC m=+452.086798362" observedRunningTime="2026-02-14 13:59:41.004539505 +0000 UTC m=+453.030529006" watchObservedRunningTime="2026-02-14 13:59:42.222663929 +0000 UTC m=+454.248653460" Feb 14 13:59:42 crc kubenswrapper[4750]: I0214 13:59:42.991803 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 13:59:43 crc kubenswrapper[4750]: I0214 13:59:43.076158 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qccxx"] Feb 14 13:59:43 crc kubenswrapper[4750]: I0214 13:59:43.692773 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 13:59:52 crc kubenswrapper[4750]: I0214 13:59:52.768033 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 13:59:52 crc kubenswrapper[4750]: I0214 13:59:52.772301 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.185393 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq"] Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.187529 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.190868 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.191822 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.200367 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq"] Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.331892 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/160d1cb0-fb48-404e-aec8-e1e38de67897-secret-volume\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.332003 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/160d1cb0-fb48-404e-aec8-e1e38de67897-config-volume\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.332050 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsdm\" (UniqueName: \"kubernetes.io/projected/160d1cb0-fb48-404e-aec8-e1e38de67897-kube-api-access-kwsdm\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.433936 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/160d1cb0-fb48-404e-aec8-e1e38de67897-secret-volume\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.434248 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/160d1cb0-fb48-404e-aec8-e1e38de67897-config-volume\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.434376 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsdm\" (UniqueName: \"kubernetes.io/projected/160d1cb0-fb48-404e-aec8-e1e38de67897-kube-api-access-kwsdm\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.435569 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/160d1cb0-fb48-404e-aec8-e1e38de67897-config-volume\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.448000 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/160d1cb0-fb48-404e-aec8-e1e38de67897-secret-volume\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.451969 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsdm\" (UniqueName: \"kubernetes.io/projected/160d1cb0-fb48-404e-aec8-e1e38de67897-kube-api-access-kwsdm\") pod \"collect-profiles-29517960-4gmzq\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.505308 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:00 crc kubenswrapper[4750]: I0214 14:00:00.988934 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq"] Feb 14 14:00:01 crc kubenswrapper[4750]: I0214 14:00:01.118836 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" event={"ID":"160d1cb0-fb48-404e-aec8-e1e38de67897","Type":"ContainerStarted","Data":"8cd0d0d14fea0e21921f4e4b8d26cf0b3e7742eb2910f15dff96c718e073bd86"} Feb 14 14:00:02 crc kubenswrapper[4750]: I0214 14:00:02.126276 4750 generic.go:334] "Generic (PLEG): container finished" podID="160d1cb0-fb48-404e-aec8-e1e38de67897" containerID="ff6bce62424cbdb81d86c990608e2c8fe724b78495b862c8fc586e0ec59ed440" exitCode=0 Feb 14 14:00:02 crc kubenswrapper[4750]: I0214 14:00:02.126336 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" 
event={"ID":"160d1cb0-fb48-404e-aec8-e1e38de67897","Type":"ContainerDied","Data":"ff6bce62424cbdb81d86c990608e2c8fe724b78495b862c8fc586e0ec59ed440"} Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.391429 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.479694 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/160d1cb0-fb48-404e-aec8-e1e38de67897-config-volume\") pod \"160d1cb0-fb48-404e-aec8-e1e38de67897\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.479962 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/160d1cb0-fb48-404e-aec8-e1e38de67897-secret-volume\") pod \"160d1cb0-fb48-404e-aec8-e1e38de67897\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.480104 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsdm\" (UniqueName: \"kubernetes.io/projected/160d1cb0-fb48-404e-aec8-e1e38de67897-kube-api-access-kwsdm\") pod \"160d1cb0-fb48-404e-aec8-e1e38de67897\" (UID: \"160d1cb0-fb48-404e-aec8-e1e38de67897\") " Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.480500 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160d1cb0-fb48-404e-aec8-e1e38de67897-config-volume" (OuterVolumeSpecName: "config-volume") pod "160d1cb0-fb48-404e-aec8-e1e38de67897" (UID: "160d1cb0-fb48-404e-aec8-e1e38de67897"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.480742 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/160d1cb0-fb48-404e-aec8-e1e38de67897-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.488343 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160d1cb0-fb48-404e-aec8-e1e38de67897-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "160d1cb0-fb48-404e-aec8-e1e38de67897" (UID: "160d1cb0-fb48-404e-aec8-e1e38de67897"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.489065 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160d1cb0-fb48-404e-aec8-e1e38de67897-kube-api-access-kwsdm" (OuterVolumeSpecName: "kube-api-access-kwsdm") pod "160d1cb0-fb48-404e-aec8-e1e38de67897" (UID: "160d1cb0-fb48-404e-aec8-e1e38de67897"). InnerVolumeSpecName "kube-api-access-kwsdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.581762 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/160d1cb0-fb48-404e-aec8-e1e38de67897-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:03 crc kubenswrapper[4750]: I0214 14:00:03.581803 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsdm\" (UniqueName: \"kubernetes.io/projected/160d1cb0-fb48-404e-aec8-e1e38de67897-kube-api-access-kwsdm\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:04 crc kubenswrapper[4750]: I0214 14:00:04.140286 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" event={"ID":"160d1cb0-fb48-404e-aec8-e1e38de67897","Type":"ContainerDied","Data":"8cd0d0d14fea0e21921f4e4b8d26cf0b3e7742eb2910f15dff96c718e073bd86"} Feb 14 14:00:04 crc kubenswrapper[4750]: I0214 14:00:04.140329 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cd0d0d14fea0e21921f4e4b8d26cf0b3e7742eb2910f15dff96c718e073bd86" Feb 14 14:00:04 crc kubenswrapper[4750]: I0214 14:00:04.140327 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.137200 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qccxx" podUID="6adb3cca-d5b3-4216-868b-73086725a3ed" containerName="console" containerID="cri-o://11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430" gracePeriod=15 Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.563956 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qccxx_6adb3cca-d5b3-4216-868b-73086725a3ed/console/0.log" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.564306 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.657974 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njrjw\" (UniqueName: \"kubernetes.io/projected/6adb3cca-d5b3-4216-868b-73086725a3ed-kube-api-access-njrjw\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.658028 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-oauth-serving-cert\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.658077 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-service-ca\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 
14:00:08.658131 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-trusted-ca-bundle\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.658167 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-console-config\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.658254 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-oauth-config\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.658295 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-serving-cert\") pod \"6adb3cca-d5b3-4216-868b-73086725a3ed\" (UID: \"6adb3cca-d5b3-4216-868b-73086725a3ed\") " Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.659470 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-console-config" (OuterVolumeSpecName: "console-config") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.659536 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.659656 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-service-ca" (OuterVolumeSpecName: "service-ca") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.659590 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.660241 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.660282 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.660304 4750 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.660322 4750 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6adb3cca-d5b3-4216-868b-73086725a3ed-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.664558 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adb3cca-d5b3-4216-868b-73086725a3ed-kube-api-access-njrjw" (OuterVolumeSpecName: "kube-api-access-njrjw") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "kube-api-access-njrjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.664649 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.665078 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6adb3cca-d5b3-4216-868b-73086725a3ed" (UID: "6adb3cca-d5b3-4216-868b-73086725a3ed"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.761507 4750 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.761537 4750 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6adb3cca-d5b3-4216-868b-73086725a3ed-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:08 crc kubenswrapper[4750]: I0214 14:00:08.761546 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njrjw\" (UniqueName: \"kubernetes.io/projected/6adb3cca-d5b3-4216-868b-73086725a3ed-kube-api-access-njrjw\") on node \"crc\" DevicePath \"\"" Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.170870 4750 generic.go:334] "Generic (PLEG): container finished" podID="6adb3cca-d5b3-4216-868b-73086725a3ed" containerID="11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430" exitCode=2 Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.170914 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qccxx" event={"ID":"6adb3cca-d5b3-4216-868b-73086725a3ed","Type":"ContainerDied","Data":"11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430"} Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.170923 4750 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qccxx" Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.170953 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qccxx" event={"ID":"6adb3cca-d5b3-4216-868b-73086725a3ed","Type":"ContainerDied","Data":"815e89e2b6b2e7e0a00b768861f2f64551852a16f69f586f44d74396dcfae93c"} Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.170974 4750 scope.go:117] "RemoveContainer" containerID="11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430" Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.203997 4750 scope.go:117] "RemoveContainer" containerID="11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430" Feb 14 14:00:09 crc kubenswrapper[4750]: E0214 14:00:09.204580 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430\": container with ID starting with 11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430 not found: ID does not exist" containerID="11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430" Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.204617 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430"} err="failed to get container status \"11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430\": rpc error: code = NotFound desc = could not find container \"11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430\": container with ID starting with 11e36bef7455b18e81bbedb6ab113e8e95f56f572ec57685085c3728fb4c8430 not found: ID does not exist" Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.226649 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-qccxx"] Feb 14 14:00:09 crc kubenswrapper[4750]: I0214 14:00:09.230902 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qccxx"] Feb 14 14:00:10 crc kubenswrapper[4750]: I0214 14:00:10.748865 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6adb3cca-d5b3-4216-868b-73086725a3ed" path="/var/lib/kubelet/pods/6adb3cca-d5b3-4216-868b-73086725a3ed/volumes" Feb 14 14:00:12 crc kubenswrapper[4750]: I0214 14:00:12.779240 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 14:00:12 crc kubenswrapper[4750]: I0214 14:00:12.786862 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-55978c67c-dxgvg" Feb 14 14:00:33 crc kubenswrapper[4750]: I0214 14:00:33.692768 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 14:00:33 crc kubenswrapper[4750]: I0214 14:00:33.746640 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 14:00:34 crc kubenswrapper[4750]: I0214 14:00:34.409025 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 14:00:35 crc kubenswrapper[4750]: I0214 14:00:35.335710 4750 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod160d1cb0-fb48-404e-aec8-e1e38de67897"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod160d1cb0-fb48-404e-aec8-e1e38de67897] : Timed out while waiting for systemd to remove kubepods-burstable-pod160d1cb0_fb48_404e_aec8_e1e38de67897.slice" Feb 14 14:00:35 crc kubenswrapper[4750]: E0214 14:00:35.335815 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for 
[kubepods burstable pod160d1cb0-fb48-404e-aec8-e1e38de67897] : unable to destroy cgroup paths for cgroup [kubepods burstable pod160d1cb0-fb48-404e-aec8-e1e38de67897] : Timed out while waiting for systemd to remove kubepods-burstable-pod160d1cb0_fb48_404e_aec8_e1e38de67897.slice" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" podUID="160d1cb0-fb48-404e-aec8-e1e38de67897" Feb 14 14:00:35 crc kubenswrapper[4750]: I0214 14:00:35.369658 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.046422 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85c54b6d88-xgthb"] Feb 14 14:00:45 crc kubenswrapper[4750]: E0214 14:00:45.048868 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adb3cca-d5b3-4216-868b-73086725a3ed" containerName="console" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.048931 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adb3cca-d5b3-4216-868b-73086725a3ed" containerName="console" Feb 14 14:00:45 crc kubenswrapper[4750]: E0214 14:00:45.048957 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160d1cb0-fb48-404e-aec8-e1e38de67897" containerName="collect-profiles" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.048975 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="160d1cb0-fb48-404e-aec8-e1e38de67897" containerName="collect-profiles" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.049270 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="160d1cb0-fb48-404e-aec8-e1e38de67897" containerName="collect-profiles" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.049330 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adb3cca-d5b3-4216-868b-73086725a3ed" containerName="console" Feb 14 14:00:45 crc 
kubenswrapper[4750]: I0214 14:00:45.050284 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.052561 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85c54b6d88-xgthb"] Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.248574 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-service-ca\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.248708 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-console-config\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.248780 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-oauth-serving-cert\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.248851 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chhv\" (UniqueName: \"kubernetes.io/projected/21250970-b4be-492f-bd89-22784c31b8fa-kube-api-access-4chhv\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 
crc kubenswrapper[4750]: I0214 14:00:45.248928 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-serving-cert\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.248979 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-trusted-ca-bundle\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.249018 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-oauth-config\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.349951 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-console-config\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.350055 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-oauth-serving-cert\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " 
pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.350151 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chhv\" (UniqueName: \"kubernetes.io/projected/21250970-b4be-492f-bd89-22784c31b8fa-kube-api-access-4chhv\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.350208 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-serving-cert\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.350255 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-trusted-ca-bundle\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.350290 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-oauth-config\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.350344 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-service-ca\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " 
pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.351584 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-console-config\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.351782 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-service-ca\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.351847 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-oauth-serving-cert\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.352493 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-trusted-ca-bundle\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.359297 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-oauth-config\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc 
kubenswrapper[4750]: I0214 14:00:45.359575 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-serving-cert\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.380020 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chhv\" (UniqueName: \"kubernetes.io/projected/21250970-b4be-492f-bd89-22784c31b8fa-kube-api-access-4chhv\") pod \"console-85c54b6d88-xgthb\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.386838 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:45 crc kubenswrapper[4750]: I0214 14:00:45.675867 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85c54b6d88-xgthb"] Feb 14 14:00:46 crc kubenswrapper[4750]: I0214 14:00:46.467441 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85c54b6d88-xgthb" event={"ID":"21250970-b4be-492f-bd89-22784c31b8fa","Type":"ContainerStarted","Data":"0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b"} Feb 14 14:00:46 crc kubenswrapper[4750]: I0214 14:00:46.467769 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85c54b6d88-xgthb" event={"ID":"21250970-b4be-492f-bd89-22784c31b8fa","Type":"ContainerStarted","Data":"647020d28e8c577b2f42934ee46e1fabea573ccf8875c7d4c890807fedb9618a"} Feb 14 14:00:46 crc kubenswrapper[4750]: I0214 14:00:46.488525 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85c54b6d88-xgthb" podStartSLOduration=1.488510933 
podStartE2EDuration="1.488510933s" podCreationTimestamp="2026-02-14 14:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:00:46.486395912 +0000 UTC m=+518.512385403" watchObservedRunningTime="2026-02-14 14:00:46.488510933 +0000 UTC m=+518.514500414" Feb 14 14:00:55 crc kubenswrapper[4750]: I0214 14:00:55.387078 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:55 crc kubenswrapper[4750]: I0214 14:00:55.387991 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:55 crc kubenswrapper[4750]: I0214 14:00:55.395905 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:55 crc kubenswrapper[4750]: I0214 14:00:55.546385 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:00:55 crc kubenswrapper[4750]: I0214 14:00:55.638364 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c8cfb5cfd-g8zdh"] Feb 14 14:01:09 crc kubenswrapper[4750]: I0214 14:01:09.111931 4750 scope.go:117] "RemoveContainer" containerID="d52f5571f12c8fa64628da8e4038ee5bbd4d96331c199c685ab5ee8699f6fe14" Feb 14 14:01:20 crc kubenswrapper[4750]: I0214 14:01:20.698236 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7c8cfb5cfd-g8zdh" podUID="7876ddbf-401e-42e3-887a-daf2787a7dd1" containerName="console" containerID="cri-o://3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1" gracePeriod=15 Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.719050 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7c8cfb5cfd-g8zdh_7876ddbf-401e-42e3-887a-daf2787a7dd1/console/0.log" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.719407 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.758962 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c8cfb5cfd-g8zdh_7876ddbf-401e-42e3-887a-daf2787a7dd1/console/0.log" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.759223 4750 generic.go:334] "Generic (PLEG): container finished" podID="7876ddbf-401e-42e3-887a-daf2787a7dd1" containerID="3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1" exitCode=2 Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.759279 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c8cfb5cfd-g8zdh" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.759282 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c8cfb5cfd-g8zdh" event={"ID":"7876ddbf-401e-42e3-887a-daf2787a7dd1","Type":"ContainerDied","Data":"3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1"} Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.759450 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c8cfb5cfd-g8zdh" event={"ID":"7876ddbf-401e-42e3-887a-daf2787a7dd1","Type":"ContainerDied","Data":"815627b2e2ef5894f6a5ade687478de466858e51440f664263444fa64da6c120"} Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.759498 4750 scope.go:117] "RemoveContainer" containerID="3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.781666 4750 scope.go:117] "RemoveContainer" containerID="3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1" Feb 14 14:01:21 crc 
kubenswrapper[4750]: E0214 14:01:21.782059 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1\": container with ID starting with 3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1 not found: ID does not exist" containerID="3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.782097 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1"} err="failed to get container status \"3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1\": rpc error: code = NotFound desc = could not find container \"3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1\": container with ID starting with 3273aa53274cd921e8cceb2c192180fedd7bc8c701aeb09af1fabb4d3ed35fa1 not found: ID does not exist" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.853740 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-trusted-ca-bundle\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.853846 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-serving-cert\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.853892 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-oauth-config\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.853922 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-service-ca\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.853981 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-config\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.854015 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-oauth-serving-cert\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.854050 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjs4x\" (UniqueName: \"kubernetes.io/projected/7876ddbf-401e-42e3-887a-daf2787a7dd1-kube-api-access-mjs4x\") pod \"7876ddbf-401e-42e3-887a-daf2787a7dd1\" (UID: \"7876ddbf-401e-42e3-887a-daf2787a7dd1\") " Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.856038 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-service-ca" (OuterVolumeSpecName: "service-ca") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.856444 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.856506 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-config" (OuterVolumeSpecName: "console-config") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.857067 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.859968 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7876ddbf-401e-42e3-887a-daf2787a7dd1-kube-api-access-mjs4x" (OuterVolumeSpecName: "kube-api-access-mjs4x") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "kube-api-access-mjs4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.860615 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.861805 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7876ddbf-401e-42e3-887a-daf2787a7dd1" (UID: "7876ddbf-401e-42e3-887a-daf2787a7dd1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955663 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955717 4750 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955737 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955755 4750 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955772 4750 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955790 4750 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7876ddbf-401e-42e3-887a-daf2787a7dd1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:21 crc kubenswrapper[4750]: I0214 14:01:21.955806 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjs4x\" (UniqueName: \"kubernetes.io/projected/7876ddbf-401e-42e3-887a-daf2787a7dd1-kube-api-access-mjs4x\") on node \"crc\" DevicePath \"\"" Feb 14 14:01:22 crc kubenswrapper[4750]: I0214 14:01:22.108805 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c8cfb5cfd-g8zdh"] Feb 14 14:01:22 crc kubenswrapper[4750]: I0214 14:01:22.112849 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7c8cfb5cfd-g8zdh"] Feb 14 14:01:22 crc kubenswrapper[4750]: I0214 14:01:22.755672 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7876ddbf-401e-42e3-887a-daf2787a7dd1" path="/var/lib/kubelet/pods/7876ddbf-401e-42e3-887a-daf2787a7dd1/volumes" Feb 14 14:01:30 crc kubenswrapper[4750]: I0214 14:01:30.129426 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:01:30 crc kubenswrapper[4750]: I0214 14:01:30.130266 4750 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:02:00 crc kubenswrapper[4750]: I0214 14:02:00.129442 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:02:00 crc kubenswrapper[4750]: I0214 14:02:00.130278 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:02:09 crc kubenswrapper[4750]: I0214 14:02:09.157808 4750 scope.go:117] "RemoveContainer" containerID="c53cf1b61ed6b75e7722bb572445e0d31610f39bd49313dd0c27838b730cb79e" Feb 14 14:02:09 crc kubenswrapper[4750]: I0214 14:02:09.180617 4750 scope.go:117] "RemoveContainer" containerID="bf4a2c153d9aef0a31213671bbd61ab3c28e40d4c685b42c92441e618ecf5cf0" Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.129027 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.129853 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.129931 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.130961 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16d8f67a0e99f472e2aad782f2e41524ac5fa5eb8cfb90a1e3ce5626b6571b16"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.131069 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://16d8f67a0e99f472e2aad782f2e41524ac5fa5eb8cfb90a1e3ce5626b6571b16" gracePeriod=600 Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.297708 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="16d8f67a0e99f472e2aad782f2e41524ac5fa5eb8cfb90a1e3ce5626b6571b16" exitCode=0 Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.297776 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"16d8f67a0e99f472e2aad782f2e41524ac5fa5eb8cfb90a1e3ce5626b6571b16"} Feb 14 14:02:30 crc kubenswrapper[4750]: I0214 14:02:30.298079 4750 scope.go:117] "RemoveContainer" containerID="ed0706ee349da09831b26e5d759efb9be0265de4607faf38c0a7fea0110aee8e" Feb 14 14:02:31 crc 
kubenswrapper[4750]: I0214 14:02:31.310390 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"0fcac30bb0adf08b1da63eb7c690c0a5ff45982d267b4e4d61b22d035f601ad8"} Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.205543 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz"] Feb 14 14:03:23 crc kubenswrapper[4750]: E0214 14:03:23.206236 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7876ddbf-401e-42e3-887a-daf2787a7dd1" containerName="console" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.206251 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7876ddbf-401e-42e3-887a-daf2787a7dd1" containerName="console" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.206369 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="7876ddbf-401e-42e3-887a-daf2787a7dd1" containerName="console" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.207199 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.208812 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.219911 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz"] Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.277089 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6xw\" (UniqueName: \"kubernetes.io/projected/b9ad9eda-9ae2-4549-b367-4ae1a795e809-kube-api-access-tt6xw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.277182 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.277263 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: 
I0214 14:03:23.378251 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6xw\" (UniqueName: \"kubernetes.io/projected/b9ad9eda-9ae2-4549-b367-4ae1a795e809-kube-api-access-tt6xw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.378323 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.378396 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.378904 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.378923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.396429 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6xw\" (UniqueName: \"kubernetes.io/projected/b9ad9eda-9ae2-4549-b367-4ae1a795e809-kube-api-access-tt6xw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.555004 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:23 crc kubenswrapper[4750]: I0214 14:03:23.765859 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz"] Feb 14 14:03:23 crc kubenswrapper[4750]: W0214 14:03:23.774806 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ad9eda_9ae2_4549_b367_4ae1a795e809.slice/crio-263d03fdaa07654c995d2a50d3cc0e96682bc81ecf99e0d53036dc2e35c42eeb WatchSource:0}: Error finding container 263d03fdaa07654c995d2a50d3cc0e96682bc81ecf99e0d53036dc2e35c42eeb: Status 404 returned error can't find the container with id 263d03fdaa07654c995d2a50d3cc0e96682bc81ecf99e0d53036dc2e35c42eeb Feb 14 14:03:24 crc kubenswrapper[4750]: I0214 14:03:24.721031 4750 generic.go:334] "Generic (PLEG): container finished" podID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerID="a9da418cd8b9a630a5077305ba6a8a63f74762624ffceb5cd56f968114944b11" exitCode=0 
Feb 14 14:03:24 crc kubenswrapper[4750]: I0214 14:03:24.721107 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" event={"ID":"b9ad9eda-9ae2-4549-b367-4ae1a795e809","Type":"ContainerDied","Data":"a9da418cd8b9a630a5077305ba6a8a63f74762624ffceb5cd56f968114944b11"} Feb 14 14:03:24 crc kubenswrapper[4750]: I0214 14:03:24.721188 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" event={"ID":"b9ad9eda-9ae2-4549-b367-4ae1a795e809","Type":"ContainerStarted","Data":"263d03fdaa07654c995d2a50d3cc0e96682bc81ecf99e0d53036dc2e35c42eeb"} Feb 14 14:03:24 crc kubenswrapper[4750]: I0214 14:03:24.722880 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:03:26 crc kubenswrapper[4750]: I0214 14:03:26.735376 4750 generic.go:334] "Generic (PLEG): container finished" podID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerID="d4e21e30b75454ee2f211c57b99938c94d6c2988970218e442b5cbe1dc55c590" exitCode=0 Feb 14 14:03:26 crc kubenswrapper[4750]: I0214 14:03:26.735427 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" event={"ID":"b9ad9eda-9ae2-4549-b367-4ae1a795e809","Type":"ContainerDied","Data":"d4e21e30b75454ee2f211c57b99938c94d6c2988970218e442b5cbe1dc55c590"} Feb 14 14:03:27 crc kubenswrapper[4750]: I0214 14:03:27.746756 4750 generic.go:334] "Generic (PLEG): container finished" podID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerID="8a86ed7df3c65ef2466f0a2eb6ea3e957b51acc9c665386ef4b0e63eee5c4018" exitCode=0 Feb 14 14:03:27 crc kubenswrapper[4750]: I0214 14:03:27.746838 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" 
event={"ID":"b9ad9eda-9ae2-4549-b367-4ae1a795e809","Type":"ContainerDied","Data":"8a86ed7df3c65ef2466f0a2eb6ea3e957b51acc9c665386ef4b0e63eee5c4018"} Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.052141 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.083043 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-bundle\") pod \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.083153 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt6xw\" (UniqueName: \"kubernetes.io/projected/b9ad9eda-9ae2-4549-b367-4ae1a795e809-kube-api-access-tt6xw\") pod \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.083190 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-util\") pod \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\" (UID: \"b9ad9eda-9ae2-4549-b367-4ae1a795e809\") " Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.085667 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-bundle" (OuterVolumeSpecName: "bundle") pod "b9ad9eda-9ae2-4549-b367-4ae1a795e809" (UID: "b9ad9eda-9ae2-4549-b367-4ae1a795e809"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.093306 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ad9eda-9ae2-4549-b367-4ae1a795e809-kube-api-access-tt6xw" (OuterVolumeSpecName: "kube-api-access-tt6xw") pod "b9ad9eda-9ae2-4549-b367-4ae1a795e809" (UID: "b9ad9eda-9ae2-4549-b367-4ae1a795e809"). InnerVolumeSpecName "kube-api-access-tt6xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.111474 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-util" (OuterVolumeSpecName: "util") pod "b9ad9eda-9ae2-4549-b367-4ae1a795e809" (UID: "b9ad9eda-9ae2-4549-b367-4ae1a795e809"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.186703 4750 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.186754 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt6xw\" (UniqueName: \"kubernetes.io/projected/b9ad9eda-9ae2-4549-b367-4ae1a795e809-kube-api-access-tt6xw\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.186776 4750 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9eda-9ae2-4549-b367-4ae1a795e809-util\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.766369 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" 
event={"ID":"b9ad9eda-9ae2-4549-b367-4ae1a795e809","Type":"ContainerDied","Data":"263d03fdaa07654c995d2a50d3cc0e96682bc81ecf99e0d53036dc2e35c42eeb"} Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.766663 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263d03fdaa07654c995d2a50d3cc0e96682bc81ecf99e0d53036dc2e35c42eeb" Feb 14 14:03:29 crc kubenswrapper[4750]: I0214 14:03:29.766532 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.584222 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4ct5"] Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.584866 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-controller" containerID="cri-o://891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.585324 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="sbdb" containerID="cri-o://31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.585378 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="nbdb" containerID="cri-o://d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.585418 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="northd" containerID="cri-o://a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.585454 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.585493 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-node" containerID="cri-o://69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.585532 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-acl-logging" containerID="cri-o://efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.644205 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" containerID="cri-o://e6df6013586eb2af879223e68fa447672996885c6ff49d92f45f4092caafce33" gracePeriod=30 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.806887 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovnkube-controller/3.log" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 
14:03:34.808801 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-acl-logging/0.log" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.809467 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-controller/0.log" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810135 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="e6df6013586eb2af879223e68fa447672996885c6ff49d92f45f4092caafce33" exitCode=0 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810164 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa" exitCode=143 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810175 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88" exitCode=143 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810199 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"e6df6013586eb2af879223e68fa447672996885c6ff49d92f45f4092caafce33"} Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810272 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa"} Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810285 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" 
event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88"} Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.810301 4750 scope.go:117] "RemoveContainer" containerID="69a5b2247e86e66b8562cbd2e1a9dfb8bc63794fbe32c00f50ba27a203c6de71" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.813205 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/2.log" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.813653 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/1.log" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.813692 4750 generic.go:334] "Generic (PLEG): container finished" podID="7475461f-e0e5-4d5e-91fd-bfe8fb575146" containerID="990256c8f115726952dc644dd7da82c2b60c46ad99f40b0f2d44563cc83e28be" exitCode=2 Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.813721 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerDied","Data":"990256c8f115726952dc644dd7da82c2b60c46ad99f40b0f2d44563cc83e28be"} Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.814189 4750 scope.go:117] "RemoveContainer" containerID="990256c8f115726952dc644dd7da82c2b60c46ad99f40b0f2d44563cc83e28be" Feb 14 14:03:34 crc kubenswrapper[4750]: E0214 14:03:34.814452 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-n59sl_openshift-multus(7475461f-e0e5-4d5e-91fd-bfe8fb575146)\"" pod="openshift-multus/multus-n59sl" podUID="7475461f-e0e5-4d5e-91fd-bfe8fb575146" Feb 14 14:03:34 crc kubenswrapper[4750]: I0214 14:03:34.855913 4750 scope.go:117] "RemoveContainer" 
containerID="ef19f0c9b0644f70e98ef0b0da995da517fc8865a8d23a7d8de22ba253c2f300" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.824056 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-acl-logging/0.log" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.824752 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-controller/0.log" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.824996 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf" exitCode=0 Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825012 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90" exitCode=0 Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825021 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d" exitCode=0 Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825029 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3" exitCode=0 Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825036 4750 generic.go:334] "Generic (PLEG): container finished" podID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerID="69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317" exitCode=0 Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825076 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" 
event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf"} Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825098 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90"} Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825134 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d"} Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825143 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3"} Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.825151 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317"} Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.827648 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/2.log" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.863372 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-acl-logging/0.log" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.863930 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-controller/0.log" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.864318 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931100 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2grmc"] Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931318 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931329 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931336 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931342 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931350 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931355 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931363 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="northd" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931369 4750 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="northd" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931378 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931384 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931395 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="util" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931400 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="util" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931407 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="pull" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931413 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="pull" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931423 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-acl-logging" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931429 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-acl-logging" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931441 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kubecfg-setup" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931447 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kubecfg-setup" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931455 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="sbdb" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931461 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="sbdb" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931471 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="extract" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931477 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="extract" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931486 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="nbdb" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931491 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="nbdb" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931501 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-node" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931524 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-node" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931534 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931540 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" 
containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931636 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ad9eda-9ae2-4549-b367-4ae1a795e809" containerName="extract" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931646 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-node" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931653 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="northd" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931661 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931668 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-acl-logging" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931675 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovn-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931682 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931690 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931697 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931707 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="sbdb" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931715 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="nbdb" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931806 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931812 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: E0214 14:03:35.931821 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931827 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.931930 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.932138 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" containerName="ovnkube-controller" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.933601 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988462 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-kubelet\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988503 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-openvswitch\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988527 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-var-lib-openvswitch\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988562 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-log-socket\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988595 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-etc-openvswitch\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988589 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988623 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97rqm\" (UniqueName: \"kubernetes.io/projected/06beb41c-7a86-45c1-85c2-c4f9543961ea-kube-api-access-97rqm\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988682 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-log-socket" (OuterVolumeSpecName: "log-socket") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988613 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988647 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988643 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988749 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-config\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988785 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-systemd-units\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988809 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-script-lib\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988833 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-env-overrides\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988835 
4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988875 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988896 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-node-log\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988909 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-netns\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988925 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-netd\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988940 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-bin\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988964 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovn-node-metrics-cert\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.988992 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-ovn-kubernetes\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989012 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-slash\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989034 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-systemd\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989051 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-ovn\") pod \"06beb41c-7a86-45c1-85c2-c4f9543961ea\" (UID: \"06beb41c-7a86-45c1-85c2-c4f9543961ea\") " Feb 14 14:03:35 crc kubenswrapper[4750]: 
I0214 14:03:35.989070 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989238 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-cni-netd\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989248 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989264 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-kubelet\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989293 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-var-lib-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989328 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-cni-bin\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989345 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-ovnkube-config\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989386 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-systemd-units\") pod \"ovnkube-node-2grmc\" (UID: 
\"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989404 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989423 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-systemd\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989263 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989280 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-slash" (OuterVolumeSpecName: "host-slash") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989296 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989295 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989357 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989378 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989395 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-node-log" (OuterVolumeSpecName: "node-log") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989410 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989500 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989523 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-ovn\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989552 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlsl\" (UniqueName: \"kubernetes.io/projected/39429601-05db-44cc-b39f-589bc3d35a34-kube-api-access-wtlsl\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989586 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-etc-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989604 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-env-overrides\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989661 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-node-log\") pod \"ovnkube-node-2grmc\" (UID: 
\"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989677 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-run-ovn-kubernetes\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989692 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-ovnkube-script-lib\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989709 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989736 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39429601-05db-44cc-b39f-589bc3d35a34-ovn-node-metrics-cert\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989757 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-slash\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989778 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-log-socket\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989798 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-run-netns\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989839 4750 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989851 4750 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989861 4750 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-node-log\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989871 4750 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989880 4750 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989889 4750 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989898 4750 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989907 4750 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-slash\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989917 4750 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989925 4750 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989932 4750 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc 
kubenswrapper[4750]: I0214 14:03:35.989940 4750 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989948 4750 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-log-socket\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989955 4750 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989963 4750 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989970 4750 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:35 crc kubenswrapper[4750]: I0214 14:03:35.989978 4750 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.003108 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06beb41c-7a86-45c1-85c2-c4f9543961ea-kube-api-access-97rqm" (OuterVolumeSpecName: "kube-api-access-97rqm") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). 
InnerVolumeSpecName "kube-api-access-97rqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.005727 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.023055 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "06beb41c-7a86-45c1-85c2-c4f9543961ea" (UID: "06beb41c-7a86-45c1-85c2-c4f9543961ea"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091144 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-node-log\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091189 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-run-ovn-kubernetes\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091216 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-ovnkube-script-lib\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091237 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091252 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-node-log\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091274 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39429601-05db-44cc-b39f-589bc3d35a34-ovn-node-metrics-cert\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091297 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-run-ovn-kubernetes\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091300 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-slash\") 
pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091332 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-log-socket\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091333 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-slash\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091351 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-run-netns\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091327 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091379 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-cni-netd\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091396 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-kubelet\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091413 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-var-lib-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091428 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-cni-netd\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091446 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-cni-bin\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091446 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-log-socket\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 
14:03:36.091476 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-run-netns\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091504 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-kubelet\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091491 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-cni-bin\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091463 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-ovnkube-config\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091643 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-systemd-units\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091670 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091688 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-systemd\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091718 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-ovn\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091739 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlsl\" (UniqueName: \"kubernetes.io/projected/39429601-05db-44cc-b39f-589bc3d35a34-kube-api-access-wtlsl\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091763 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-etc-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091782 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-env-overrides\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091863 4750 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06beb41c-7a86-45c1-85c2-c4f9543961ea-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091875 4750 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06beb41c-7a86-45c1-85c2-c4f9543961ea-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091884 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97rqm\" (UniqueName: \"kubernetes.io/projected/06beb41c-7a86-45c1-85c2-c4f9543961ea-kube-api-access-97rqm\") on node \"crc\" DevicePath \"\"" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092136 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-systemd\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.091468 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-var-lib-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092232 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-systemd-units\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092258 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092303 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-ovnkube-config\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092342 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-etc-openvswitch\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092365 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39429601-05db-44cc-b39f-589bc3d35a34-run-ovn\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092425 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-ovnkube-script-lib\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.092453 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39429601-05db-44cc-b39f-589bc3d35a34-env-overrides\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.095073 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39429601-05db-44cc-b39f-589bc3d35a34-ovn-node-metrics-cert\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.113656 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlsl\" (UniqueName: \"kubernetes.io/projected/39429601-05db-44cc-b39f-589bc3d35a34-kube-api-access-wtlsl\") pod \"ovnkube-node-2grmc\" (UID: \"39429601-05db-44cc-b39f-589bc3d35a34\") " pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.244682 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.837619 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-acl-logging/0.log" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.839194 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4ct5_06beb41c-7a86-45c1-85c2-c4f9543961ea/ovn-controller/0.log" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.839812 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" event={"ID":"06beb41c-7a86-45c1-85c2-c4f9543961ea","Type":"ContainerDied","Data":"d5c18f894bba462ec6f9069917f8c96e3a8d3aa4c33293322449fdf84886c9ba"} Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.839870 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4ct5" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.839893 4750 scope.go:117] "RemoveContainer" containerID="e6df6013586eb2af879223e68fa447672996885c6ff49d92f45f4092caafce33" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.841766 4750 generic.go:334] "Generic (PLEG): container finished" podID="39429601-05db-44cc-b39f-589bc3d35a34" containerID="51fe6e865e0f1f4581feb9f9c7a491843401043b9f7fdbb966e25a3e8c36bd6d" exitCode=0 Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.841796 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerDied","Data":"51fe6e865e0f1f4581feb9f9c7a491843401043b9f7fdbb966e25a3e8c36bd6d"} Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.841950 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" 
event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"4ffe3bde098ce727c66475a280cf4b2bc32a36ece776254e74142fbf16d3a7f3"} Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.869034 4750 scope.go:117] "RemoveContainer" containerID="31d797a1ad1d9800e7fe7eb3ff0473f22e945d66f1e9db25fe59796703aabdcf" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.871381 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4ct5"] Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.880303 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4ct5"] Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.909568 4750 scope.go:117] "RemoveContainer" containerID="d9830f55833fcc216f21904f5b7cd8d248a7990a8c89e03774a1a39d25471e90" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.930663 4750 scope.go:117] "RemoveContainer" containerID="a5f420c23e92090fa5f728906968c487df90a065e97a436d2323c0c6aa9a9d6d" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.957009 4750 scope.go:117] "RemoveContainer" containerID="bba373611f17ebfd79b321a52f4caab01ced58c650b98ddddaa1e8097b56e0d3" Feb 14 14:03:36 crc kubenswrapper[4750]: I0214 14:03:36.977553 4750 scope.go:117] "RemoveContainer" containerID="69e029ba25d5c60ee8c8dd7120d354c610e311000ff2d9e20a06573a51016317" Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.004944 4750 scope.go:117] "RemoveContainer" containerID="efad4ec5085dfa88f8faac93f612d598a0d14a2e7c01c07e35b81f203b2015aa" Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.032358 4750 scope.go:117] "RemoveContainer" containerID="891f174366fca55b912e7ff20a8341ec6440ee9afe6677bf8288dc11f2b06d88" Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.049296 4750 scope.go:117] "RemoveContainer" containerID="dc617987fb2d8905ffc6656f0cd557c6c803aaf48eb8643252ae87ae4a682691" Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.851535 4750 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"a98f52d6b4fdb1209a32c7f021356b7e7849173f31809ead94f3c63d8d89f6e1"} Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.851962 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"b7585d0e30a8758b76e25d888c4bff40c0b1870d370e9d0d38747fb81365715e"} Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.851973 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"d10dd365e682ef55ff8c68c8111427aa64887d2f6f3a6dbe52eec315246dd979"} Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.851984 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"201d609f51ce3afeb2a0dbf4bbb8eee13d0764f0a1f646f9818786942a01e93c"} Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.851992 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"4e644f92445acbf6e3b22fa6f35beed979fab3fd68a7cafa22ed014e69e7eda9"} Feb 14 14:03:37 crc kubenswrapper[4750]: I0214 14:03:37.852000 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"5cda8ab3790838669f58c761050ee21bcf956c578da6794d1dc575b1a63fc266"} Feb 14 14:03:38 crc kubenswrapper[4750]: I0214 14:03:38.748522 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="06beb41c-7a86-45c1-85c2-c4f9543961ea" path="/var/lib/kubelet/pods/06beb41c-7a86-45c1-85c2-c4f9543961ea/volumes" Feb 14 14:03:40 crc kubenswrapper[4750]: I0214 14:03:40.878493 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"f87bff314d04b5cf8f41ca337048f9d5a7452a844f402d2e7b3d4957e5044fe9"} Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.307824 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"] Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.308729 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.312525 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.313136 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.313272 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-dk5n9" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.363567 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-428vt\" (UniqueName: \"kubernetes.io/projected/874d7068-1761-42b7-8e65-5ea7669259f4-kube-api-access-428vt\") pod \"obo-prometheus-operator-68bc856cb9-6hfcq\" (UID: \"874d7068-1761-42b7-8e65-5ea7669259f4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.465332 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-428vt\" (UniqueName: \"kubernetes.io/projected/874d7068-1761-42b7-8e65-5ea7669259f4-kube-api-access-428vt\") pod \"obo-prometheus-operator-68bc856cb9-6hfcq\" (UID: \"874d7068-1761-42b7-8e65-5ea7669259f4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.478012 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"] Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.480742 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.483898 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.488234 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-sdxvc" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.492262 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"] Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.493202 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.498240 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-428vt\" (UniqueName: \"kubernetes.io/projected/874d7068-1761-42b7-8e65-5ea7669259f4-kube-api-access-428vt\") pod \"obo-prometheus-operator-68bc856cb9-6hfcq\" (UID: \"874d7068-1761-42b7-8e65-5ea7669259f4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.566882 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75d8f617-5e52-472a-922a-88563b49d041-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2\" (UID: \"75d8f617-5e52-472a-922a-88563b49d041\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.566938 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75d8f617-5e52-472a-922a-88563b49d041-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2\" (UID: \"75d8f617-5e52-472a-922a-88563b49d041\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.567037 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f776de07-4c75-4295-839f-6e10713be326-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz\" (UID: \"f776de07-4c75-4295-839f-6e10713be326\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc 
kubenswrapper[4750]: I0214 14:03:41.567057 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f776de07-4c75-4295-839f-6e10713be326-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz\" (UID: \"f776de07-4c75-4295-839f-6e10713be326\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.620843 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-xn27s"] Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.621930 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.624083 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-s9r4b" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.624218 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.626421 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.664448 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(dae0ae55a18749cdbf8150b66a2dcfe9f6c20115a8819bc63dbcc40ee2e401f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.664523 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(dae0ae55a18749cdbf8150b66a2dcfe9f6c20115a8819bc63dbcc40ee2e401f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.664549 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(dae0ae55a18749cdbf8150b66a2dcfe9f6c20115a8819bc63dbcc40ee2e401f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.664597 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators(874d7068-1761-42b7-8e65-5ea7669259f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators(874d7068-1761-42b7-8e65-5ea7669259f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(dae0ae55a18749cdbf8150b66a2dcfe9f6c20115a8819bc63dbcc40ee2e401f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" podUID="874d7068-1761-42b7-8e65-5ea7669259f4" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.668897 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1852ee04-b190-41b5-8261-d481c237b27d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-xn27s\" (UID: \"1852ee04-b190-41b5-8261-d481c237b27d\") " pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.668954 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rk2\" (UniqueName: \"kubernetes.io/projected/1852ee04-b190-41b5-8261-d481c237b27d-kube-api-access-l5rk2\") pod \"observability-operator-59bdc8b94-xn27s\" (UID: \"1852ee04-b190-41b5-8261-d481c237b27d\") " pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.668994 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f776de07-4c75-4295-839f-6e10713be326-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz\" (UID: \"f776de07-4c75-4295-839f-6e10713be326\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.669017 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f776de07-4c75-4295-839f-6e10713be326-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz\" (UID: \"f776de07-4c75-4295-839f-6e10713be326\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc 
kubenswrapper[4750]: I0214 14:03:41.669076 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75d8f617-5e52-472a-922a-88563b49d041-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2\" (UID: \"75d8f617-5e52-472a-922a-88563b49d041\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.669105 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75d8f617-5e52-472a-922a-88563b49d041-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2\" (UID: \"75d8f617-5e52-472a-922a-88563b49d041\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.673795 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75d8f617-5e52-472a-922a-88563b49d041-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2\" (UID: \"75d8f617-5e52-472a-922a-88563b49d041\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.674729 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f776de07-4c75-4295-839f-6e10713be326-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz\" (UID: \"f776de07-4c75-4295-839f-6e10713be326\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.676163 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/75d8f617-5e52-472a-922a-88563b49d041-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2\" (UID: \"75d8f617-5e52-472a-922a-88563b49d041\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.677601 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f776de07-4c75-4295-839f-6e10713be326-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz\" (UID: \"f776de07-4c75-4295-839f-6e10713be326\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.734334 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vl469"] Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.735274 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.736866 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-8vvxc" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.770861 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1852ee04-b190-41b5-8261-d481c237b27d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-xn27s\" (UID: \"1852ee04-b190-41b5-8261-d481c237b27d\") " pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.770917 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rk2\" (UniqueName: \"kubernetes.io/projected/1852ee04-b190-41b5-8261-d481c237b27d-kube-api-access-l5rk2\") pod \"observability-operator-59bdc8b94-xn27s\" (UID: \"1852ee04-b190-41b5-8261-d481c237b27d\") " pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.770954 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f6b191a-aa81-4827-80eb-5bbbdf54eeba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vl469\" (UID: \"8f6b191a-aa81-4827-80eb-5bbbdf54eeba\") " pod="openshift-operators/perses-operator-5bf474d74f-vl469" Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.770969 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr2wc\" (UniqueName: \"kubernetes.io/projected/8f6b191a-aa81-4827-80eb-5bbbdf54eeba-kube-api-access-mr2wc\") pod \"perses-operator-5bf474d74f-vl469\" (UID: \"8f6b191a-aa81-4827-80eb-5bbbdf54eeba\") " 
pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.775657 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1852ee04-b190-41b5-8261-d481c237b27d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-xn27s\" (UID: \"1852ee04-b190-41b5-8261-d481c237b27d\") " pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.799898 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rk2\" (UniqueName: \"kubernetes.io/projected/1852ee04-b190-41b5-8261-d481c237b27d-kube-api-access-l5rk2\") pod \"observability-operator-59bdc8b94-xn27s\" (UID: \"1852ee04-b190-41b5-8261-d481c237b27d\") " pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.826646 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.839663 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.854393 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(a0f32bab81ce34b2c1e458df3f218f80e8c62dd72fa49dca130236010bc7ec7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.854454 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(a0f32bab81ce34b2c1e458df3f218f80e8c62dd72fa49dca130236010bc7ec7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.854479 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(a0f32bab81ce34b2c1e458df3f218f80e8c62dd72fa49dca130236010bc7ec7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.854530 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators(75d8f617-5e52-472a-922a-88563b49d041)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators(75d8f617-5e52-472a-922a-88563b49d041)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(a0f32bab81ce34b2c1e458df3f218f80e8c62dd72fa49dca130236010bc7ec7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" podUID="75d8f617-5e52-472a-922a-88563b49d041"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.866493 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(9db979b2df4de28d57b7b083e078b8b549b4871586041dda175e85c2442558fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.866607 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(9db979b2df4de28d57b7b083e078b8b549b4871586041dda175e85c2442558fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.866633 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(9db979b2df4de28d57b7b083e078b8b549b4871586041dda175e85c2442558fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.866698 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators(f776de07-4c75-4295-839f-6e10713be326)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators(f776de07-4c75-4295-839f-6e10713be326)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(9db979b2df4de28d57b7b083e078b8b549b4871586041dda175e85c2442558fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" podUID="f776de07-4c75-4295-839f-6e10713be326"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.872160 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f6b191a-aa81-4827-80eb-5bbbdf54eeba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vl469\" (UID: \"8f6b191a-aa81-4827-80eb-5bbbdf54eeba\") " pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.872217 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr2wc\" (UniqueName: \"kubernetes.io/projected/8f6b191a-aa81-4827-80eb-5bbbdf54eeba-kube-api-access-mr2wc\") pod \"perses-operator-5bf474d74f-vl469\" (UID: \"8f6b191a-aa81-4827-80eb-5bbbdf54eeba\") " pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.873061 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f6b191a-aa81-4827-80eb-5bbbdf54eeba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vl469\" (UID: \"8f6b191a-aa81-4827-80eb-5bbbdf54eeba\") " pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.888516 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr2wc\" (UniqueName: \"kubernetes.io/projected/8f6b191a-aa81-4827-80eb-5bbbdf54eeba-kube-api-access-mr2wc\") pod \"perses-operator-5bf474d74f-vl469\" (UID: \"8f6b191a-aa81-4827-80eb-5bbbdf54eeba\") " pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:41 crc kubenswrapper[4750]: I0214 14:03:41.947619 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.972597 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(45736a69f02cca113cc761231071fe4daf3f140f55f44eaa0019eee5d23f4362): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.972664 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(45736a69f02cca113cc761231071fe4daf3f140f55f44eaa0019eee5d23f4362): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.972683 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(45736a69f02cca113cc761231071fe4daf3f140f55f44eaa0019eee5d23f4362): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:41 crc kubenswrapper[4750]: E0214 14:03:41.972740 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-xn27s_openshift-operators(1852ee04-b190-41b5-8261-d481c237b27d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-xn27s_openshift-operators(1852ee04-b190-41b5-8261-d481c237b27d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(45736a69f02cca113cc761231071fe4daf3f140f55f44eaa0019eee5d23f4362): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" podUID="1852ee04-b190-41b5-8261-d481c237b27d"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.056697 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.081092 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(7233afee182ad3b70e544768472a36d6a2cc68743dcb29aff64b1cee7487eed8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.081181 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(7233afee182ad3b70e544768472a36d6a2cc68743dcb29aff64b1cee7487eed8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.081207 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(7233afee182ad3b70e544768472a36d6a2cc68743dcb29aff64b1cee7487eed8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.081261 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vl469_openshift-operators(8f6b191a-aa81-4827-80eb-5bbbdf54eeba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vl469_openshift-operators(8f6b191a-aa81-4827-80eb-5bbbdf54eeba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(7233afee182ad3b70e544768472a36d6a2cc68743dcb29aff64b1cee7487eed8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vl469" podUID="8f6b191a-aa81-4827-80eb-5bbbdf54eeba"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.901297 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" event={"ID":"39429601-05db-44cc-b39f-589bc3d35a34","Type":"ContainerStarted","Data":"1e08f80d56782cc608ce48c8035647978697c7e37a04ad3f0e086a885b50069b"}
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.901589 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.940307 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-xn27s"]
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.940391 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.940777 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.944362 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"]
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.944433 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.944731 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.950358 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" podStartSLOduration=7.950338657 podStartE2EDuration="7.950338657s" podCreationTimestamp="2026-02-14 14:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:03:42.946717514 +0000 UTC m=+694.972707005" watchObservedRunningTime="2026-02-14 14:03:42.950338657 +0000 UTC m=+694.976328148"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.961352 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"]
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.961458 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.961906 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.967190 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"]
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.967282 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.967624 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.971373 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(ee9ca1a3102debe1389ada42930d30d73f7557375ecf134f6f4f36407781a54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.971446 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(ee9ca1a3102debe1389ada42930d30d73f7557375ecf134f6f4f36407781a54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.971481 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(ee9ca1a3102debe1389ada42930d30d73f7557375ecf134f6f4f36407781a54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:42 crc kubenswrapper[4750]: E0214 14:03:42.971530 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-xn27s_openshift-operators(1852ee04-b190-41b5-8261-d481c237b27d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-xn27s_openshift-operators(1852ee04-b190-41b5-8261-d481c237b27d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(ee9ca1a3102debe1389ada42930d30d73f7557375ecf134f6f4f36407781a54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" podUID="1852ee04-b190-41b5-8261-d481c237b27d"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.971986 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.991368 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vl469"]
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.991441 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:42 crc kubenswrapper[4750]: I0214 14:03:42.991788 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.019667 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(beed0bf837f38df97ac6dcffe274d7833263028c4549f30c366eb0c23ae90cb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.019739 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(beed0bf837f38df97ac6dcffe274d7833263028c4549f30c366eb0c23ae90cb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.019759 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(beed0bf837f38df97ac6dcffe274d7833263028c4549f30c366eb0c23ae90cb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.019794 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators(874d7068-1761-42b7-8e65-5ea7669259f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators(874d7068-1761-42b7-8e65-5ea7669259f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(beed0bf837f38df97ac6dcffe274d7833263028c4549f30c366eb0c23ae90cb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" podUID="874d7068-1761-42b7-8e65-5ea7669259f4"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.074245 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(98d8b747c19dd18419d4b90e25d86422993140c57f8fea12a357b85caf2426de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.074311 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(98d8b747c19dd18419d4b90e25d86422993140c57f8fea12a357b85caf2426de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.074329 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(98d8b747c19dd18419d4b90e25d86422993140c57f8fea12a357b85caf2426de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.074370 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators(75d8f617-5e52-472a-922a-88563b49d041)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators(75d8f617-5e52-472a-922a-88563b49d041)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(98d8b747c19dd18419d4b90e25d86422993140c57f8fea12a357b85caf2426de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" podUID="75d8f617-5e52-472a-922a-88563b49d041"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.094367 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(00ecd10e79c46e9048302531bff33ce14aff71db8b7e74d4319eb52a354ccb6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.094431 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(00ecd10e79c46e9048302531bff33ce14aff71db8b7e74d4319eb52a354ccb6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.094452 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(00ecd10e79c46e9048302531bff33ce14aff71db8b7e74d4319eb52a354ccb6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.094501 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators(f776de07-4c75-4295-839f-6e10713be326)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators(f776de07-4c75-4295-839f-6e10713be326)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(00ecd10e79c46e9048302531bff33ce14aff71db8b7e74d4319eb52a354ccb6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" podUID="f776de07-4c75-4295-839f-6e10713be326"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.099089 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(4bcf03dc80d43bccc23d2572234ebdf0fad0e48bc88443664babd836912420a2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.099202 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(4bcf03dc80d43bccc23d2572234ebdf0fad0e48bc88443664babd836912420a2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.099232 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(4bcf03dc80d43bccc23d2572234ebdf0fad0e48bc88443664babd836912420a2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:43 crc kubenswrapper[4750]: E0214 14:03:43.099288 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vl469_openshift-operators(8f6b191a-aa81-4827-80eb-5bbbdf54eeba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vl469_openshift-operators(8f6b191a-aa81-4827-80eb-5bbbdf54eeba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(4bcf03dc80d43bccc23d2572234ebdf0fad0e48bc88443664babd836912420a2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vl469" podUID="8f6b191a-aa81-4827-80eb-5bbbdf54eeba"
Feb 14 14:03:43 crc kubenswrapper[4750]: I0214 14:03:43.907736 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc"
Feb 14 14:03:43 crc kubenswrapper[4750]: I0214 14:03:43.907993 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc"
Feb 14 14:03:43 crc kubenswrapper[4750]: I0214 14:03:43.950785 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc"
Feb 14 14:03:47 crc kubenswrapper[4750]: I0214 14:03:47.741785 4750 scope.go:117] "RemoveContainer" containerID="990256c8f115726952dc644dd7da82c2b60c46ad99f40b0f2d44563cc83e28be"
Feb 14 14:03:47 crc kubenswrapper[4750]: E0214 14:03:47.742546 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-n59sl_openshift-multus(7475461f-e0e5-4d5e-91fd-bfe8fb575146)\"" pod="openshift-multus/multus-n59sl" podUID="7475461f-e0e5-4d5e-91fd-bfe8fb575146"
Feb 14 14:03:53 crc kubenswrapper[4750]: I0214 14:03:53.740982 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:53 crc kubenswrapper[4750]: I0214 14:03:53.741923 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:53 crc kubenswrapper[4750]: E0214 14:03:53.774223 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(3c661ec9d96d02db8cf24a0a0364cce8552b83df52d5f8acd43f4854c0be1c85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:53 crc kubenswrapper[4750]: E0214 14:03:53.774304 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(3c661ec9d96d02db8cf24a0a0364cce8552b83df52d5f8acd43f4854c0be1c85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:53 crc kubenswrapper[4750]: E0214 14:03:53.774328 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(3c661ec9d96d02db8cf24a0a0364cce8552b83df52d5f8acd43f4854c0be1c85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vl469"
Feb 14 14:03:53 crc kubenswrapper[4750]: E0214 14:03:53.774390 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vl469_openshift-operators(8f6b191a-aa81-4827-80eb-5bbbdf54eeba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vl469_openshift-operators(8f6b191a-aa81-4827-80eb-5bbbdf54eeba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vl469_openshift-operators_8f6b191a-aa81-4827-80eb-5bbbdf54eeba_0(3c661ec9d96d02db8cf24a0a0364cce8552b83df52d5f8acd43f4854c0be1c85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vl469" podUID="8f6b191a-aa81-4827-80eb-5bbbdf54eeba"
Feb 14 14:03:54 crc kubenswrapper[4750]: I0214 14:03:54.741760 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:54 crc kubenswrapper[4750]: I0214 14:03:54.742279 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:54 crc kubenswrapper[4750]: I0214 14:03:54.742563 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:54 crc kubenswrapper[4750]: I0214 14:03:54.741738 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"
Feb 14 14:03:54 crc kubenswrapper[4750]: I0214 14:03:54.743202 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:54 crc kubenswrapper[4750]: I0214 14:03:54.743703 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.800947 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(38196973e89f9790a3240599dc50bd8e171639f9cefe9e456331a4493cc52406): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.801304 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(38196973e89f9790a3240599dc50bd8e171639f9cefe9e456331a4493cc52406): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.801325 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(38196973e89f9790a3240599dc50bd8e171639f9cefe9e456331a4493cc52406): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xn27s"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.801376 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-xn27s_openshift-operators(1852ee04-b190-41b5-8261-d481c237b27d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-xn27s_openshift-operators(1852ee04-b190-41b5-8261-d481c237b27d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xn27s_openshift-operators_1852ee04-b190-41b5-8261-d481c237b27d_0(38196973e89f9790a3240599dc50bd8e171639f9cefe9e456331a4493cc52406): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" podUID="1852ee04-b190-41b5-8261-d481c237b27d"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.804638 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(77357cfcc0b3aee72bebf0e5316db8c8a2edfd1678a5a8384386759737d63ee8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.804728 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(77357cfcc0b3aee72bebf0e5316db8c8a2edfd1678a5a8384386759737d63ee8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.804800 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(77357cfcc0b3aee72bebf0e5316db8c8a2edfd1678a5a8384386759737d63ee8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"
Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.805570 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators(75d8f617-5e52-472a-922a-88563b49d041)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators(75d8f617-5e52-472a-922a-88563b49d041)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_openshift-operators_75d8f617-5e52-472a-922a-88563b49d041_0(77357cfcc0b3aee72bebf0e5316db8c8a2edfd1678a5a8384386759737d63ee8): no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" podUID="75d8f617-5e52-472a-922a-88563b49d041" Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.816025 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(f764835070519bae535f481724384c5e22cc3d6e0825243899451518b6eee63a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.816081 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(f764835070519bae535f481724384c5e22cc3d6e0825243899451518b6eee63a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.816101 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(f764835070519bae535f481724384c5e22cc3d6e0825243899451518b6eee63a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:03:54 crc kubenswrapper[4750]: E0214 14:03:54.816152 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators(874d7068-1761-42b7-8e65-5ea7669259f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators(874d7068-1761-42b7-8e65-5ea7669259f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6hfcq_openshift-operators_874d7068-1761-42b7-8e65-5ea7669259f4_0(f764835070519bae535f481724384c5e22cc3d6e0825243899451518b6eee63a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" podUID="874d7068-1761-42b7-8e65-5ea7669259f4" Feb 14 14:03:57 crc kubenswrapper[4750]: I0214 14:03:57.740948 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:57 crc kubenswrapper[4750]: I0214 14:03:57.741504 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:57 crc kubenswrapper[4750]: E0214 14:03:57.768464 4750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(388c69abdf3e4aeb8dc11475be1b9dd1b54124a98d273be30b9c03fa940d5595): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 14:03:57 crc kubenswrapper[4750]: E0214 14:03:57.768525 4750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(388c69abdf3e4aeb8dc11475be1b9dd1b54124a98d273be30b9c03fa940d5595): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:57 crc kubenswrapper[4750]: E0214 14:03:57.768549 4750 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(388c69abdf3e4aeb8dc11475be1b9dd1b54124a98d273be30b9c03fa940d5595): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:03:57 crc kubenswrapper[4750]: E0214 14:03:57.768596 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators(f776de07-4c75-4295-839f-6e10713be326)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators(f776de07-4c75-4295-839f-6e10713be326)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_openshift-operators_f776de07-4c75-4295-839f-6e10713be326_0(388c69abdf3e4aeb8dc11475be1b9dd1b54124a98d273be30b9c03fa940d5595): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" podUID="f776de07-4c75-4295-839f-6e10713be326" Feb 14 14:04:00 crc kubenswrapper[4750]: I0214 14:04:00.742195 4750 scope.go:117] "RemoveContainer" containerID="990256c8f115726952dc644dd7da82c2b60c46ad99f40b0f2d44563cc83e28be" Feb 14 14:04:01 crc kubenswrapper[4750]: I0214 14:04:01.015534 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n59sl_7475461f-e0e5-4d5e-91fd-bfe8fb575146/kube-multus/2.log" Feb 14 14:04:01 crc kubenswrapper[4750]: I0214 14:04:01.015905 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n59sl" event={"ID":"7475461f-e0e5-4d5e-91fd-bfe8fb575146","Type":"ContainerStarted","Data":"c434b4e9090e7fb4142c1d3c8cc65505c5e4caae7119ce15f12651f184f507ad"} Feb 14 14:04:05 crc kubenswrapper[4750]: I0214 14:04:05.740825 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:04:05 crc kubenswrapper[4750]: I0214 14:04:05.741678 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.221812 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq"] Feb 14 14:04:06 crc kubenswrapper[4750]: W0214 14:04:06.226126 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874d7068_1761_42b7_8e65_5ea7669259f4.slice/crio-2fdd085e244ec6f042c8255dba280883836f89d84f50426698ff6766a4c829d9 WatchSource:0}: Error finding container 2fdd085e244ec6f042c8255dba280883836f89d84f50426698ff6766a4c829d9: Status 404 returned error can't find the container with id 2fdd085e244ec6f042c8255dba280883836f89d84f50426698ff6766a4c829d9 Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.267686 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2grmc" Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.741159 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.741359 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469" Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.741819 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.741854 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vl469" Feb 14 14:04:06 crc kubenswrapper[4750]: I0214 14:04:06.966998 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-xn27s"] Feb 14 14:04:06 crc kubenswrapper[4750]: W0214 14:04:06.972367 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1852ee04_b190_41b5_8261_d481c237b27d.slice/crio-b7a13acb79a699fa8d453481c73084dffcecc7fd3b24b56a700cb52900246d1b WatchSource:0}: Error finding container b7a13acb79a699fa8d453481c73084dffcecc7fd3b24b56a700cb52900246d1b: Status 404 returned error can't find the container with id b7a13acb79a699fa8d453481c73084dffcecc7fd3b24b56a700cb52900246d1b Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.019557 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vl469"] Feb 14 14:04:07 crc kubenswrapper[4750]: W0214 14:04:07.029162 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f6b191a_aa81_4827_80eb_5bbbdf54eeba.slice/crio-aa15c49529e1936502fb19ebdafdbfbbbfacb8e6db2f09dabf7d0d1be50622a7 WatchSource:0}: Error finding container aa15c49529e1936502fb19ebdafdbfbbbfacb8e6db2f09dabf7d0d1be50622a7: Status 404 returned error can't find the container with id aa15c49529e1936502fb19ebdafdbfbbbfacb8e6db2f09dabf7d0d1be50622a7 Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.054427 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" event={"ID":"1852ee04-b190-41b5-8261-d481c237b27d","Type":"ContainerStarted","Data":"b7a13acb79a699fa8d453481c73084dffcecc7fd3b24b56a700cb52900246d1b"} Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.055871 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-vl469" event={"ID":"8f6b191a-aa81-4827-80eb-5bbbdf54eeba","Type":"ContainerStarted","Data":"aa15c49529e1936502fb19ebdafdbfbbbfacb8e6db2f09dabf7d0d1be50622a7"} Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.057020 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" event={"ID":"874d7068-1761-42b7-8e65-5ea7669259f4","Type":"ContainerStarted","Data":"2fdd085e244ec6f042c8255dba280883836f89d84f50426698ff6766a4c829d9"} Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.741453 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.742150 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" Feb 14 14:04:07 crc kubenswrapper[4750]: I0214 14:04:07.972626 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2"] Feb 14 14:04:08 crc kubenswrapper[4750]: I0214 14:04:08.070496 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" event={"ID":"75d8f617-5e52-472a-922a-88563b49d041","Type":"ContainerStarted","Data":"a024ab7a7d873d55433ed5374d6133a28db4b1de10e3d6e930c4cfab8180fdf3"} Feb 14 14:04:09 crc kubenswrapper[4750]: I0214 14:04:09.741270 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:04:09 crc kubenswrapper[4750]: I0214 14:04:09.741816 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.046564 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz"] Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.147077 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" event={"ID":"f776de07-4c75-4295-839f-6e10713be326","Type":"ContainerStarted","Data":"467841a9e0a94ba616f29c3753baa8d1eaa7ea616a92a0680553acee0abf9b73"} Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.148345 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-vl469" event={"ID":"8f6b191a-aa81-4827-80eb-5bbbdf54eeba","Type":"ContainerStarted","Data":"7dec4d6752ed4650933a09cd56d469208a4d9d2d2d1391a1ec722be64f1f3059"} Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.148582 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-vl469" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.150227 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" event={"ID":"874d7068-1761-42b7-8e65-5ea7669259f4","Type":"ContainerStarted","Data":"ba4565f25dbde156d2ca8fcc337be38d9c91e4c6033bdd44d3a46d4ac8809a8c"} Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.152387 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" event={"ID":"75d8f617-5e52-472a-922a-88563b49d041","Type":"ContainerStarted","Data":"9010573f9dcb136acf36dbff6f5aeb3b30f16b859c6c96465f4db874584ac8b6"} Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.154053 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-xn27s" event={"ID":"1852ee04-b190-41b5-8261-d481c237b27d","Type":"ContainerStarted","Data":"4a0c807e56c9f72c5ef6f904d02f222d45119ba1370aef2182f3ce70e63b9458"} Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.154230 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.156529 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.169813 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-vl469" podStartSLOduration=26.566073027 podStartE2EDuration="35.169795714s" podCreationTimestamp="2026-02-14 14:03:41 +0000 UTC" firstStartedPulling="2026-02-14 14:04:07.032885568 +0000 UTC m=+719.058875049" lastFinishedPulling="2026-02-14 14:04:15.636608255 +0000 UTC m=+727.662597736" observedRunningTime="2026-02-14 14:04:16.165302197 +0000 UTC m=+728.191291678" watchObservedRunningTime="2026-02-14 14:04:16.169795714 +0000 UTC m=+728.195785195" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.191514 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-45zb2" podStartSLOduration=27.558865482 podStartE2EDuration="35.191498104s" podCreationTimestamp="2026-02-14 14:03:41 +0000 UTC" firstStartedPulling="2026-02-14 14:04:07.983684843 +0000 UTC m=+720.009674324" lastFinishedPulling="2026-02-14 14:04:15.616317465 +0000 UTC m=+727.642306946" observedRunningTime="2026-02-14 14:04:16.188020366 +0000 UTC m=+728.214009887" watchObservedRunningTime="2026-02-14 14:04:16.191498104 +0000 UTC m=+728.217487585" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.225405 4750 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-xn27s" podStartSLOduration=26.551442736 podStartE2EDuration="35.225390518s" podCreationTimestamp="2026-02-14 14:03:41 +0000 UTC" firstStartedPulling="2026-02-14 14:04:06.974487416 +0000 UTC m=+719.000476907" lastFinishedPulling="2026-02-14 14:04:15.648435208 +0000 UTC m=+727.674424689" observedRunningTime="2026-02-14 14:04:16.224382819 +0000 UTC m=+728.250372300" watchObservedRunningTime="2026-02-14 14:04:16.225390518 +0000 UTC m=+728.251379999" Feb 14 14:04:16 crc kubenswrapper[4750]: I0214 14:04:16.251502 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6hfcq" podStartSLOduration=25.875514782 podStartE2EDuration="35.251473401s" podCreationTimestamp="2026-02-14 14:03:41 +0000 UTC" firstStartedPulling="2026-02-14 14:04:06.228659456 +0000 UTC m=+718.254648937" lastFinishedPulling="2026-02-14 14:04:15.604618075 +0000 UTC m=+727.630607556" observedRunningTime="2026-02-14 14:04:16.245598216 +0000 UTC m=+728.271587697" watchObservedRunningTime="2026-02-14 14:04:16.251473401 +0000 UTC m=+728.277462882" Feb 14 14:04:17 crc kubenswrapper[4750]: I0214 14:04:17.175082 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" event={"ID":"f776de07-4c75-4295-839f-6e10713be326","Type":"ContainerStarted","Data":"78c2bbaeffcf40355a98ebd42964e862828ad6a98b9816032b1318c042fbc613"} Feb 14 14:04:17 crc kubenswrapper[4750]: I0214 14:04:17.196024 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz" podStartSLOduration=36.196003771 podStartE2EDuration="36.196003771s" podCreationTimestamp="2026-02-14 14:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:04:17.191791642 +0000 UTC m=+729.217781133" watchObservedRunningTime="2026-02-14 14:04:17.196003771 +0000 UTC m=+729.221993252" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.913646 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lns8l"] Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.915024 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.918534 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.918637 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.918966 4750 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-r5tff" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.923934 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gtl8p"] Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.927141 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gtl8p" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.929740 4750 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-trq69" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.933916 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lns8l"] Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.947069 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gtl8p"] Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.953453 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jttq6"] Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.954449 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.959832 4750 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wzb48" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.967938 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jttq6"] Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.996890 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jhw\" (UniqueName: \"kubernetes.io/projected/5dcd18b1-7ced-4567-a937-01a9c6c8b66f-kube-api-access-f9jhw\") pod \"cert-manager-cainjector-cf98fcc89-lns8l\" (UID: \"5dcd18b1-7ced-4567-a937-01a9c6c8b66f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" Feb 14 14:04:21 crc kubenswrapper[4750]: I0214 14:04:21.997194 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbprd\" (UniqueName: 
\"kubernetes.io/projected/1fb05431-9eaa-4243-8a3f-fdc9699e102a-kube-api-access-zbprd\") pod \"cert-manager-858654f9db-gtl8p\" (UID: \"1fb05431-9eaa-4243-8a3f-fdc9699e102a\") " pod="cert-manager/cert-manager-858654f9db-gtl8p" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.058609 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-vl469" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.099079 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9jhw\" (UniqueName: \"kubernetes.io/projected/5dcd18b1-7ced-4567-a937-01a9c6c8b66f-kube-api-access-f9jhw\") pod \"cert-manager-cainjector-cf98fcc89-lns8l\" (UID: \"5dcd18b1-7ced-4567-a937-01a9c6c8b66f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.099243 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbprd\" (UniqueName: \"kubernetes.io/projected/1fb05431-9eaa-4243-8a3f-fdc9699e102a-kube-api-access-zbprd\") pod \"cert-manager-858654f9db-gtl8p\" (UID: \"1fb05431-9eaa-4243-8a3f-fdc9699e102a\") " pod="cert-manager/cert-manager-858654f9db-gtl8p" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.099311 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xnvx\" (UniqueName: \"kubernetes.io/projected/dead2cb0-8f6c-40c2-b4a5-a1eb2a506890-kube-api-access-5xnvx\") pod \"cert-manager-webhook-687f57d79b-jttq6\" (UID: \"dead2cb0-8f6c-40c2-b4a5-a1eb2a506890\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.136369 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jhw\" (UniqueName: \"kubernetes.io/projected/5dcd18b1-7ced-4567-a937-01a9c6c8b66f-kube-api-access-f9jhw\") pod 
\"cert-manager-cainjector-cf98fcc89-lns8l\" (UID: \"5dcd18b1-7ced-4567-a937-01a9c6c8b66f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.145992 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbprd\" (UniqueName: \"kubernetes.io/projected/1fb05431-9eaa-4243-8a3f-fdc9699e102a-kube-api-access-zbprd\") pod \"cert-manager-858654f9db-gtl8p\" (UID: \"1fb05431-9eaa-4243-8a3f-fdc9699e102a\") " pod="cert-manager/cert-manager-858654f9db-gtl8p" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.200580 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xnvx\" (UniqueName: \"kubernetes.io/projected/dead2cb0-8f6c-40c2-b4a5-a1eb2a506890-kube-api-access-5xnvx\") pod \"cert-manager-webhook-687f57d79b-jttq6\" (UID: \"dead2cb0-8f6c-40c2-b4a5-a1eb2a506890\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.227916 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xnvx\" (UniqueName: \"kubernetes.io/projected/dead2cb0-8f6c-40c2-b4a5-a1eb2a506890-kube-api-access-5xnvx\") pod \"cert-manager-webhook-687f57d79b-jttq6\" (UID: \"dead2cb0-8f6c-40c2-b4a5-a1eb2a506890\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.240054 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.250651 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gtl8p" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.282431 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.488894 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gtl8p"] Feb 14 14:04:22 crc kubenswrapper[4750]: W0214 14:04:22.495312 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fb05431_9eaa_4243_8a3f_fdc9699e102a.slice/crio-f9cbd62d0b5b5ce320e7985beca4f585d8d3521c44e974e6ed7707e68a9921f4 WatchSource:0}: Error finding container f9cbd62d0b5b5ce320e7985beca4f585d8d3521c44e974e6ed7707e68a9921f4: Status 404 returned error can't find the container with id f9cbd62d0b5b5ce320e7985beca4f585d8d3521c44e974e6ed7707e68a9921f4 Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.664130 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lns8l"] Feb 14 14:04:22 crc kubenswrapper[4750]: I0214 14:04:22.730422 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jttq6"] Feb 14 14:04:23 crc kubenswrapper[4750]: I0214 14:04:23.221281 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gtl8p" event={"ID":"1fb05431-9eaa-4243-8a3f-fdc9699e102a","Type":"ContainerStarted","Data":"f9cbd62d0b5b5ce320e7985beca4f585d8d3521c44e974e6ed7707e68a9921f4"} Feb 14 14:04:23 crc kubenswrapper[4750]: I0214 14:04:23.223402 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" event={"ID":"5dcd18b1-7ced-4567-a937-01a9c6c8b66f","Type":"ContainerStarted","Data":"f77dc20bfe98296ba17a8094e0697d711614ee10cb57beeab3af0a9abc943d7d"} Feb 14 14:04:23 crc kubenswrapper[4750]: I0214 14:04:23.224512 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" 
event={"ID":"dead2cb0-8f6c-40c2-b4a5-a1eb2a506890","Type":"ContainerStarted","Data":"f1580b80881e05c7f2cfac3c9c3726b4083a184945236893748065309a466ed6"} Feb 14 14:04:29 crc kubenswrapper[4750]: I0214 14:04:29.279266 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" event={"ID":"5dcd18b1-7ced-4567-a937-01a9c6c8b66f","Type":"ContainerStarted","Data":"b877b491277eb6012c22e0d9f9b243cff4e2290d48b2639720b03212e9062006"} Feb 14 14:04:29 crc kubenswrapper[4750]: I0214 14:04:29.282350 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" event={"ID":"dead2cb0-8f6c-40c2-b4a5-a1eb2a506890","Type":"ContainerStarted","Data":"2af7480e29cc62f1cf07187f29030d3fc89f433a5c9df8dd2b006c2eb379642f"} Feb 14 14:04:29 crc kubenswrapper[4750]: I0214 14:04:29.282655 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:29 crc kubenswrapper[4750]: I0214 14:04:29.301972 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lns8l" podStartSLOduration=1.875730897 podStartE2EDuration="8.301953623s" podCreationTimestamp="2026-02-14 14:04:21 +0000 UTC" firstStartedPulling="2026-02-14 14:04:22.667681465 +0000 UTC m=+734.693670946" lastFinishedPulling="2026-02-14 14:04:29.093904191 +0000 UTC m=+741.119893672" observedRunningTime="2026-02-14 14:04:29.297529789 +0000 UTC m=+741.323519270" watchObservedRunningTime="2026-02-14 14:04:29.301953623 +0000 UTC m=+741.327943114" Feb 14 14:04:29 crc kubenswrapper[4750]: I0214 14:04:29.316057 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" podStartSLOduration=2.269622567 podStartE2EDuration="8.316035339s" podCreationTimestamp="2026-02-14 14:04:21 +0000 UTC" firstStartedPulling="2026-02-14 14:04:22.736968694 
+0000 UTC m=+734.762958175" lastFinishedPulling="2026-02-14 14:04:28.783381466 +0000 UTC m=+740.809370947" observedRunningTime="2026-02-14 14:04:29.314069774 +0000 UTC m=+741.340059255" watchObservedRunningTime="2026-02-14 14:04:29.316035339 +0000 UTC m=+741.342024830" Feb 14 14:04:30 crc kubenswrapper[4750]: I0214 14:04:30.129429 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:04:30 crc kubenswrapper[4750]: I0214 14:04:30.129552 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:04:34 crc kubenswrapper[4750]: E0214 14:04:34.979426 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading blob sha256:eb22a290856639dea675216487978402e44591b16925ad8af3223fb9c0d50099: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/jetstack/cert-manager-controller:v1.19.2" Feb 14 14:04:34 crc kubenswrapper[4750]: E0214 14:04:34.980782 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-controller,Image:quay.io/jetstack/cert-manager-controller:v1.19.2,Command:[],Args:[--v=2 --cluster-resource-namespace=$(POD_NAMESPACE) --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.19.2 
--max-concurrent-challenges=60],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},ContainerPort{Name:http-healthz,HostPort:0,ContainerPort:9403,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbprd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 http-healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-858654f9db-gtl8p_cert-manager(1fb05431-9eaa-4243-8a3f-fdc9699e102a): ErrImagePull: copying system image from manifest list: reading blob sha256:eb22a290856639dea675216487978402e44591b16925ad8af3223fb9c0d50099: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" 
logger="UnhandledError" Feb 14 14:04:34 crc kubenswrapper[4750]: E0214 14:04:34.982166 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-controller\" with ErrImagePull: \"copying system image from manifest list: reading blob sha256:eb22a290856639dea675216487978402e44591b16925ad8af3223fb9c0d50099: fetching blob: received unexpected HTTP status: 504 Gateway Time-out\"" pod="cert-manager/cert-manager-858654f9db-gtl8p" podUID="1fb05431-9eaa-4243-8a3f-fdc9699e102a" Feb 14 14:04:35 crc kubenswrapper[4750]: E0214 14:04:35.325368 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/jetstack/cert-manager-controller:v1.19.2\\\"\"" pod="cert-manager/cert-manager-858654f9db-gtl8p" podUID="1fb05431-9eaa-4243-8a3f-fdc9699e102a" Feb 14 14:04:37 crc kubenswrapper[4750]: I0214 14:04:37.287080 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jttq6" Feb 14 14:04:45 crc kubenswrapper[4750]: I0214 14:04:45.920326 4750 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 14:04:49 crc kubenswrapper[4750]: I0214 14:04:49.433214 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gtl8p" event={"ID":"1fb05431-9eaa-4243-8a3f-fdc9699e102a","Type":"ContainerStarted","Data":"1b0e8b6937e8f1f4365979851ce0d59204b500c3a4e9bc3e7d4581e3ec8b0b19"} Feb 14 14:04:49 crc kubenswrapper[4750]: I0214 14:04:49.497516 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gtl8p" podStartSLOduration=2.494122093 podStartE2EDuration="28.497493482s" podCreationTimestamp="2026-02-14 14:04:21 +0000 UTC" firstStartedPulling="2026-02-14 14:04:22.497583911 +0000 UTC 
m=+734.523573392" lastFinishedPulling="2026-02-14 14:04:48.50095526 +0000 UTC m=+760.526944781" observedRunningTime="2026-02-14 14:04:49.452547248 +0000 UTC m=+761.478536789" watchObservedRunningTime="2026-02-14 14:04:49.497493482 +0000 UTC m=+761.523482973" Feb 14 14:05:00 crc kubenswrapper[4750]: I0214 14:05:00.129710 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:05:00 crc kubenswrapper[4750]: I0214 14:05:00.130471 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.300143 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp"] Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.302894 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.312843 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp"] Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.319162 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.491769 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.491840 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.491988 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5pm\" (UniqueName: \"kubernetes.io/projected/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-kube-api-access-5g5pm\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: 
I0214 14:05:14.507205 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv"] Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.508321 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.561816 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv"] Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.606337 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5pm\" (UniqueName: \"kubernetes.io/projected/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-kube-api-access-5g5pm\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.606418 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.606457 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" 
Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.606865 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.607089 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.636172 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5pm\" (UniqueName: \"kubernetes.io/projected/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-kube-api-access-5g5pm\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.708224 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.708272 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.708343 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kb2\" (UniqueName: \"kubernetes.io/projected/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-kube-api-access-72kb2\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.809611 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.809673 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.809752 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kb2\" (UniqueName: \"kubernetes.io/projected/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-kube-api-access-72kb2\") pod 
\"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.810060 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.810196 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.830333 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kb2\" (UniqueName: \"kubernetes.io/projected/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-kube-api-access-72kb2\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:14 crc kubenswrapper[4750]: I0214 14:05:14.929994 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.122934 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.145541 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp"] Feb 14 14:05:15 crc kubenswrapper[4750]: W0214 14:05:15.160483 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bd1dff7_6b82_41bd_959a_8bc13f6c5a77.slice/crio-da7cc995d7474960fc0ee17ed7e2e9ca6157e09c5021020165604c8f96c7215f WatchSource:0}: Error finding container da7cc995d7474960fc0ee17ed7e2e9ca6157e09c5021020165604c8f96c7215f: Status 404 returned error can't find the container with id da7cc995d7474960fc0ee17ed7e2e9ca6157e09c5021020165604c8f96c7215f Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.317910 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv"] Feb 14 14:05:15 crc kubenswrapper[4750]: W0214 14:05:15.330796 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f11fe7_74bb_4283_add4_8ca1fb45a3ae.slice/crio-d56ae1e6265ff4d3c7fe6e721bb28961c3e8722995264c7b2b4ce2cd765161e3 WatchSource:0}: Error finding container d56ae1e6265ff4d3c7fe6e721bb28961c3e8722995264c7b2b4ce2cd765161e3: Status 404 returned error can't find the container with id d56ae1e6265ff4d3c7fe6e721bb28961c3e8722995264c7b2b4ce2cd765161e3 Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.707165 4750 generic.go:334] "Generic (PLEG): container finished" podID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerID="7f5026371b292eae84e5b7580a2cfccc12cb0a3eecdddeaebb2e33e8e73c7d07" exitCode=0 Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.707212 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" event={"ID":"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae","Type":"ContainerDied","Data":"7f5026371b292eae84e5b7580a2cfccc12cb0a3eecdddeaebb2e33e8e73c7d07"} Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.707265 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" event={"ID":"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae","Type":"ContainerStarted","Data":"d56ae1e6265ff4d3c7fe6e721bb28961c3e8722995264c7b2b4ce2cd765161e3"} Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.708559 4750 generic.go:334] "Generic (PLEG): container finished" podID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerID="b635e151ed53eb031e3a2267633c430e37f49f2a39b4e74419f45a1aecbe97c0" exitCode=0 Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.708597 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" event={"ID":"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77","Type":"ContainerDied","Data":"b635e151ed53eb031e3a2267633c430e37f49f2a39b4e74419f45a1aecbe97c0"} Feb 14 14:05:15 crc kubenswrapper[4750]: I0214 14:05:15.708628 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" event={"ID":"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77","Type":"ContainerStarted","Data":"da7cc995d7474960fc0ee17ed7e2e9ca6157e09c5021020165604c8f96c7215f"} Feb 14 14:05:17 crc kubenswrapper[4750]: I0214 14:05:17.729237 4750 generic.go:334] "Generic (PLEG): container finished" podID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerID="08d6828898b5c2f94220c21031ceb12b871e7e9c3fc1a921e39ade221b583eac" exitCode=0 Feb 14 14:05:17 crc kubenswrapper[4750]: I0214 14:05:17.729535 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" event={"ID":"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae","Type":"ContainerDied","Data":"08d6828898b5c2f94220c21031ceb12b871e7e9c3fc1a921e39ade221b583eac"} Feb 14 14:05:17 crc kubenswrapper[4750]: I0214 14:05:17.735365 4750 generic.go:334] "Generic (PLEG): container finished" podID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerID="cf82a993be378a133b62da59861a3d1fb6a2086498480203d5d99769114330d6" exitCode=0 Feb 14 14:05:17 crc kubenswrapper[4750]: I0214 14:05:17.735437 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" event={"ID":"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77","Type":"ContainerDied","Data":"cf82a993be378a133b62da59861a3d1fb6a2086498480203d5d99769114330d6"} Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.027330 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kwp7g"] Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.028954 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.050399 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kwp7g"] Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.161261 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-catalog-content\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.161307 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghhg\" (UniqueName: \"kubernetes.io/projected/5f65cedc-23af-4578-a81d-7647078525c7-kube-api-access-2ghhg\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.161493 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-utilities\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.263326 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-utilities\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.263406 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-catalog-content\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.263437 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghhg\" (UniqueName: \"kubernetes.io/projected/5f65cedc-23af-4578-a81d-7647078525c7-kube-api-access-2ghhg\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.263903 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-utilities\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.263947 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-catalog-content\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.285294 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghhg\" (UniqueName: \"kubernetes.io/projected/5f65cedc-23af-4578-a81d-7647078525c7-kube-api-access-2ghhg\") pod \"redhat-operators-kwp7g\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.355478 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.742655 4750 generic.go:334] "Generic (PLEG): container finished" podID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerID="42bc489dd50c15ed0621e3f244bd931ce0fe6fd31bf28915c59319fc91b03e75" exitCode=0 Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.750591 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" event={"ID":"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77","Type":"ContainerDied","Data":"42bc489dd50c15ed0621e3f244bd931ce0fe6fd31bf28915c59319fc91b03e75"} Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.751978 4750 generic.go:334] "Generic (PLEG): container finished" podID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerID="e0910f22e2b6063d2d4d1cdf43e5c3fe844848085f3b29863aefb456e3fa7e3e" exitCode=0 Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.752032 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" event={"ID":"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae","Type":"ContainerDied","Data":"e0910f22e2b6063d2d4d1cdf43e5c3fe844848085f3b29863aefb456e3fa7e3e"} Feb 14 14:05:18 crc kubenswrapper[4750]: I0214 14:05:18.870221 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kwp7g"] Feb 14 14:05:18 crc kubenswrapper[4750]: W0214 14:05:18.874713 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f65cedc_23af_4578_a81d_7647078525c7.slice/crio-2854fd08edabb63dc99ebc9f0995a2e3cac7eb47b5fc37885cbd5df31a44d4b2 WatchSource:0}: Error finding container 2854fd08edabb63dc99ebc9f0995a2e3cac7eb47b5fc37885cbd5df31a44d4b2: Status 404 returned error can't find the container with id 
2854fd08edabb63dc99ebc9f0995a2e3cac7eb47b5fc37885cbd5df31a44d4b2 Feb 14 14:05:19 crc kubenswrapper[4750]: I0214 14:05:19.762409 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f65cedc-23af-4578-a81d-7647078525c7" containerID="b9f2e6d838e38793b8747cabddc0e908735d7c78053bb354bbc2ef87c4ebb4bc" exitCode=0 Feb 14 14:05:19 crc kubenswrapper[4750]: I0214 14:05:19.762464 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerDied","Data":"b9f2e6d838e38793b8747cabddc0e908735d7c78053bb354bbc2ef87c4ebb4bc"} Feb 14 14:05:19 crc kubenswrapper[4750]: I0214 14:05:19.762935 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerStarted","Data":"2854fd08edabb63dc99ebc9f0995a2e3cac7eb47b5fc37885cbd5df31a44d4b2"} Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.056539 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.061355 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.190838 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-util\") pod \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.190947 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g5pm\" (UniqueName: \"kubernetes.io/projected/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-kube-api-access-5g5pm\") pod \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.190985 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-util\") pod \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.191079 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72kb2\" (UniqueName: \"kubernetes.io/projected/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-kube-api-access-72kb2\") pod \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.191108 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-bundle\") pod \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\" (UID: \"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77\") " Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.191167 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-bundle\") pod \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\" (UID: \"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae\") " Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.194987 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-bundle" (OuterVolumeSpecName: "bundle") pod "c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" (UID: "c8f11fe7-74bb-4283-add4-8ca1fb45a3ae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.197566 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-bundle" (OuterVolumeSpecName: "bundle") pod "9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" (UID: "9bd1dff7-6b82-41bd-959a-8bc13f6c5a77"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.202048 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-kube-api-access-5g5pm" (OuterVolumeSpecName: "kube-api-access-5g5pm") pod "9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" (UID: "9bd1dff7-6b82-41bd-959a-8bc13f6c5a77"). InnerVolumeSpecName "kube-api-access-5g5pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.203289 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-kube-api-access-72kb2" (OuterVolumeSpecName: "kube-api-access-72kb2") pod "c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" (UID: "c8f11fe7-74bb-4283-add4-8ca1fb45a3ae"). InnerVolumeSpecName "kube-api-access-72kb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.213643 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-util" (OuterVolumeSpecName: "util") pod "9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" (UID: "9bd1dff7-6b82-41bd-959a-8bc13f6c5a77"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.214313 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-util" (OuterVolumeSpecName: "util") pod "c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" (UID: "c8f11fe7-74bb-4283-add4-8ca1fb45a3ae"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.292972 4750 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.293008 4750 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-util\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.293017 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g5pm\" (UniqueName: \"kubernetes.io/projected/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-kube-api-access-5g5pm\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.293029 4750 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-util\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.293037 4750 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72kb2\" (UniqueName: \"kubernetes.io/projected/c8f11fe7-74bb-4283-add4-8ca1fb45a3ae-kube-api-access-72kb2\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.293045 4750 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bd1dff7-6b82-41bd-959a-8bc13f6c5a77-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.771370 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" event={"ID":"9bd1dff7-6b82-41bd-959a-8bc13f6c5a77","Type":"ContainerDied","Data":"da7cc995d7474960fc0ee17ed7e2e9ca6157e09c5021020165604c8f96c7215f"} Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.771415 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7cc995d7474960fc0ee17ed7e2e9ca6157e09c5021020165604c8f96c7215f" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.771418 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.773033 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerStarted","Data":"2ceb92863505a9b5e7fd8d33be73d15dd10b409caf6bde1f5795a0871699f628"} Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.777878 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" event={"ID":"c8f11fe7-74bb-4283-add4-8ca1fb45a3ae","Type":"ContainerDied","Data":"d56ae1e6265ff4d3c7fe6e721bb28961c3e8722995264c7b2b4ce2cd765161e3"} Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.777926 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56ae1e6265ff4d3c7fe6e721bb28961c3e8722995264c7b2b4ce2cd765161e3" Feb 14 14:05:20 crc kubenswrapper[4750]: I0214 14:05:20.777989 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv" Feb 14 14:05:21 crc kubenswrapper[4750]: I0214 14:05:21.787683 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f65cedc-23af-4578-a81d-7647078525c7" containerID="2ceb92863505a9b5e7fd8d33be73d15dd10b409caf6bde1f5795a0871699f628" exitCode=0 Feb 14 14:05:21 crc kubenswrapper[4750]: I0214 14:05:21.787795 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerDied","Data":"2ceb92863505a9b5e7fd8d33be73d15dd10b409caf6bde1f5795a0871699f628"} Feb 14 14:05:22 crc kubenswrapper[4750]: I0214 14:05:22.797159 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerStarted","Data":"ac41ab63d543d68c64e301b62454d8cf919638a803c051f0cd8cc4b73501ce11"} Feb 14 14:05:22 crc kubenswrapper[4750]: I0214 14:05:22.828387 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kwp7g" podStartSLOduration=2.431412403 podStartE2EDuration="4.828371426s" podCreationTimestamp="2026-02-14 14:05:18 +0000 UTC" firstStartedPulling="2026-02-14 14:05:19.763836535 +0000 UTC m=+791.789826016" lastFinishedPulling="2026-02-14 14:05:22.160795538 +0000 UTC m=+794.186785039" observedRunningTime="2026-02-14 14:05:22.823222691 +0000 UTC m=+794.849212182" watchObservedRunningTime="2026-02-14 14:05:22.828371426 +0000 UTC m=+794.854360897" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.466591 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-8x65s"] Feb 14 14:05:24 crc kubenswrapper[4750]: E0214 14:05:24.466939 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" 
containerName="extract" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.466957 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerName="extract" Feb 14 14:05:24 crc kubenswrapper[4750]: E0214 14:05:24.466981 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="util" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.466992 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="util" Feb 14 14:05:24 crc kubenswrapper[4750]: E0214 14:05:24.467017 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="extract" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467028 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="extract" Feb 14 14:05:24 crc kubenswrapper[4750]: E0214 14:05:24.467046 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerName="util" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467056 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerName="util" Feb 14 14:05:24 crc kubenswrapper[4750]: E0214 14:05:24.467071 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="pull" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467080 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="pull" Feb 14 14:05:24 crc kubenswrapper[4750]: E0214 14:05:24.467098 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerName="pull" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467108 4750 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerName="pull" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467306 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f11fe7-74bb-4283-add4-8ca1fb45a3ae" containerName="extract" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467328 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd1dff7-6b82-41bd-959a-8bc13f6c5a77" containerName="extract" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.467890 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.469720 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.469904 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.470223 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-mz2gw" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.483989 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-8x65s"] Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.495629 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfsw\" (UniqueName: \"kubernetes.io/projected/f13003c2-701e-4806-abb5-23f0e95cf8c2-kube-api-access-mtfsw\") pod \"cluster-logging-operator-c769fd969-8x65s\" (UID: \"f13003c2-701e-4806-abb5-23f0e95cf8c2\") " pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.597064 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mtfsw\" (UniqueName: \"kubernetes.io/projected/f13003c2-701e-4806-abb5-23f0e95cf8c2-kube-api-access-mtfsw\") pod \"cluster-logging-operator-c769fd969-8x65s\" (UID: \"f13003c2-701e-4806-abb5-23f0e95cf8c2\") " pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.617795 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfsw\" (UniqueName: \"kubernetes.io/projected/f13003c2-701e-4806-abb5-23f0e95cf8c2-kube-api-access-mtfsw\") pod \"cluster-logging-operator-c769fd969-8x65s\" (UID: \"f13003c2-701e-4806-abb5-23f0e95cf8c2\") " pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" Feb 14 14:05:24 crc kubenswrapper[4750]: I0214 14:05:24.784756 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" Feb 14 14:05:25 crc kubenswrapper[4750]: I0214 14:05:25.246415 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-8x65s"] Feb 14 14:05:25 crc kubenswrapper[4750]: W0214 14:05:25.253190 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf13003c2_701e_4806_abb5_23f0e95cf8c2.slice/crio-9e349a32f15a6144ddc48148f0dcc7e86c247a9b44617c9e10a645c5ebe9a376 WatchSource:0}: Error finding container 9e349a32f15a6144ddc48148f0dcc7e86c247a9b44617c9e10a645c5ebe9a376: Status 404 returned error can't find the container with id 9e349a32f15a6144ddc48148f0dcc7e86c247a9b44617c9e10a645c5ebe9a376 Feb 14 14:05:25 crc kubenswrapper[4750]: I0214 14:05:25.815272 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" 
event={"ID":"f13003c2-701e-4806-abb5-23f0e95cf8c2","Type":"ContainerStarted","Data":"9e349a32f15a6144ddc48148f0dcc7e86c247a9b44617c9e10a645c5ebe9a376"} Feb 14 14:05:28 crc kubenswrapper[4750]: I0214 14:05:28.359624 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:28 crc kubenswrapper[4750]: I0214 14:05:28.359885 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:29 crc kubenswrapper[4750]: I0214 14:05:29.412736 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kwp7g" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="registry-server" probeResult="failure" output=< Feb 14 14:05:29 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:05:29 crc kubenswrapper[4750]: > Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.129676 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.129768 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.129840 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.130694 4750 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fcac30bb0adf08b1da63eb7c690c0a5ff45982d267b4e4d61b22d035f601ad8"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.130790 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://0fcac30bb0adf08b1da63eb7c690c0a5ff45982d267b4e4d61b22d035f601ad8" gracePeriod=600 Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.862689 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="0fcac30bb0adf08b1da63eb7c690c0a5ff45982d267b4e4d61b22d035f601ad8" exitCode=0 Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.862744 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"0fcac30bb0adf08b1da63eb7c690c0a5ff45982d267b4e4d61b22d035f601ad8"} Feb 14 14:05:30 crc kubenswrapper[4750]: I0214 14:05:30.863001 4750 scope.go:117] "RemoveContainer" containerID="16d8f67a0e99f472e2aad782f2e41524ac5fa5eb8cfb90a1e3ce5626b6571b16" Feb 14 14:05:32 crc kubenswrapper[4750]: I0214 14:05:32.878815 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"4b148be393ce42f2c5c0534a347aaef624d5f007a262f94fbbdd37721a992213"} Feb 14 14:05:32 crc kubenswrapper[4750]: I0214 14:05:32.880250 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" event={"ID":"f13003c2-701e-4806-abb5-23f0e95cf8c2","Type":"ContainerStarted","Data":"9f44a13067303ab279677d4a32faa607e8657b77b22160ae52ef3dafd8bc08f3"} Feb 14 14:05:32 crc kubenswrapper[4750]: I0214 14:05:32.914520 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-8x65s" podStartSLOduration=1.8948367130000001 podStartE2EDuration="8.914498359s" podCreationTimestamp="2026-02-14 14:05:24 +0000 UTC" firstStartedPulling="2026-02-14 14:05:25.256337655 +0000 UTC m=+797.282327136" lastFinishedPulling="2026-02-14 14:05:32.275999291 +0000 UTC m=+804.301988782" observedRunningTime="2026-02-14 14:05:32.909321913 +0000 UTC m=+804.935311404" watchObservedRunningTime="2026-02-14 14:05:32.914498359 +0000 UTC m=+804.940487840" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.338976 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4"] Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.340783 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.343389 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.343534 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.343685 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.343788 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.343932 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-8hpkz" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.344220 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.361714 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4"] Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.365669 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnjd\" (UniqueName: \"kubernetes.io/projected/b4c5732e-22b3-490f-a53b-d09c07a0a36f-kube-api-access-psnjd\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: 
I0214 14:05:35.365727 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b4c5732e-22b3-490f-a53b-d09c07a0a36f-manager-config\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.365774 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-apiservice-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.365824 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.365890 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-webhook-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.466832 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.466897 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-webhook-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.466938 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnjd\" (UniqueName: \"kubernetes.io/projected/b4c5732e-22b3-490f-a53b-d09c07a0a36f-kube-api-access-psnjd\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.466978 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b4c5732e-22b3-490f-a53b-d09c07a0a36f-manager-config\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.467052 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-apiservice-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: 
\"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.468094 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b4c5732e-22b3-490f-a53b-d09c07a0a36f-manager-config\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.473671 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-apiservice-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.474415 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.480197 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4c5732e-22b3-490f-a53b-d09c07a0a36f-webhook-cert\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.483205 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-psnjd\" (UniqueName: \"kubernetes.io/projected/b4c5732e-22b3-490f-a53b-d09c07a0a36f-kube-api-access-psnjd\") pod \"loki-operator-controller-manager-846996f79f-rwhb4\" (UID: \"b4c5732e-22b3-490f-a53b-d09c07a0a36f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.656630 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:35 crc kubenswrapper[4750]: I0214 14:05:35.956863 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4"] Feb 14 14:05:35 crc kubenswrapper[4750]: W0214 14:05:35.964751 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4c5732e_22b3_490f_a53b_d09c07a0a36f.slice/crio-aeb1c5315a5c7ecf54b7df072035643695d9668fdd4aecc5c976ade90a4fa973 WatchSource:0}: Error finding container aeb1c5315a5c7ecf54b7df072035643695d9668fdd4aecc5c976ade90a4fa973: Status 404 returned error can't find the container with id aeb1c5315a5c7ecf54b7df072035643695d9668fdd4aecc5c976ade90a4fa973 Feb 14 14:05:36 crc kubenswrapper[4750]: I0214 14:05:36.909125 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" event={"ID":"b4c5732e-22b3-490f-a53b-d09c07a0a36f","Type":"ContainerStarted","Data":"aeb1c5315a5c7ecf54b7df072035643695d9668fdd4aecc5c976ade90a4fa973"} Feb 14 14:05:38 crc kubenswrapper[4750]: I0214 14:05:38.423514 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:38 crc kubenswrapper[4750]: I0214 14:05:38.467762 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:39 crc kubenswrapper[4750]: I0214 14:05:39.931454 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" event={"ID":"b4c5732e-22b3-490f-a53b-d09c07a0a36f","Type":"ContainerStarted","Data":"de4c2a2b6ffd99562616f0af5b8b8a950fbb3a1996e0a688d32f1a64b7aaa4c0"} Feb 14 14:05:41 crc kubenswrapper[4750]: I0214 14:05:41.620353 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kwp7g"] Feb 14 14:05:41 crc kubenswrapper[4750]: I0214 14:05:41.620927 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kwp7g" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="registry-server" containerID="cri-o://ac41ab63d543d68c64e301b62454d8cf919638a803c051f0cd8cc4b73501ce11" gracePeriod=2 Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.029414 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f65cedc-23af-4578-a81d-7647078525c7" containerID="ac41ab63d543d68c64e301b62454d8cf919638a803c051f0cd8cc4b73501ce11" exitCode=0 Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.029754 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerDied","Data":"ac41ab63d543d68c64e301b62454d8cf919638a803c051f0cd8cc4b73501ce11"} Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.159625 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.269773 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghhg\" (UniqueName: \"kubernetes.io/projected/5f65cedc-23af-4578-a81d-7647078525c7-kube-api-access-2ghhg\") pod \"5f65cedc-23af-4578-a81d-7647078525c7\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.269930 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-catalog-content\") pod \"5f65cedc-23af-4578-a81d-7647078525c7\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.269948 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-utilities\") pod \"5f65cedc-23af-4578-a81d-7647078525c7\" (UID: \"5f65cedc-23af-4578-a81d-7647078525c7\") " Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.276517 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-utilities" (OuterVolumeSpecName: "utilities") pod "5f65cedc-23af-4578-a81d-7647078525c7" (UID: "5f65cedc-23af-4578-a81d-7647078525c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.295370 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f65cedc-23af-4578-a81d-7647078525c7-kube-api-access-2ghhg" (OuterVolumeSpecName: "kube-api-access-2ghhg") pod "5f65cedc-23af-4578-a81d-7647078525c7" (UID: "5f65cedc-23af-4578-a81d-7647078525c7"). InnerVolumeSpecName "kube-api-access-2ghhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.373038 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.373085 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghhg\" (UniqueName: \"kubernetes.io/projected/5f65cedc-23af-4578-a81d-7647078525c7-kube-api-access-2ghhg\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.403036 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f65cedc-23af-4578-a81d-7647078525c7" (UID: "5f65cedc-23af-4578-a81d-7647078525c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:05:42 crc kubenswrapper[4750]: I0214 14:05:42.474082 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f65cedc-23af-4578-a81d-7647078525c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:05:43 crc kubenswrapper[4750]: I0214 14:05:43.038599 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwp7g" event={"ID":"5f65cedc-23af-4578-a81d-7647078525c7","Type":"ContainerDied","Data":"2854fd08edabb63dc99ebc9f0995a2e3cac7eb47b5fc37885cbd5df31a44d4b2"} Feb 14 14:05:43 crc kubenswrapper[4750]: I0214 14:05:43.038664 4750 scope.go:117] "RemoveContainer" containerID="ac41ab63d543d68c64e301b62454d8cf919638a803c051f0cd8cc4b73501ce11" Feb 14 14:05:43 crc kubenswrapper[4750]: I0214 14:05:43.038702 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwp7g" Feb 14 14:05:43 crc kubenswrapper[4750]: I0214 14:05:43.068152 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kwp7g"] Feb 14 14:05:43 crc kubenswrapper[4750]: I0214 14:05:43.077838 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kwp7g"] Feb 14 14:05:44 crc kubenswrapper[4750]: I0214 14:05:44.751552 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f65cedc-23af-4578-a81d-7647078525c7" path="/var/lib/kubelet/pods/5f65cedc-23af-4578-a81d-7647078525c7/volumes" Feb 14 14:05:45 crc kubenswrapper[4750]: I0214 14:05:45.205227 4750 scope.go:117] "RemoveContainer" containerID="2ceb92863505a9b5e7fd8d33be73d15dd10b409caf6bde1f5795a0871699f628" Feb 14 14:05:45 crc kubenswrapper[4750]: I0214 14:05:45.865432 4750 scope.go:117] "RemoveContainer" containerID="b9f2e6d838e38793b8747cabddc0e908735d7c78053bb354bbc2ef87c4ebb4bc" Feb 14 14:05:47 crc kubenswrapper[4750]: I0214 14:05:47.074519 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" event={"ID":"b4c5732e-22b3-490f-a53b-d09c07a0a36f","Type":"ContainerStarted","Data":"36024d400f1bdbbaf9d8921f5b08239f6d09e364f3e9e4e9d019b4774f4a65b7"} Feb 14 14:05:47 crc kubenswrapper[4750]: I0214 14:05:47.074969 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:47 crc kubenswrapper[4750]: I0214 14:05:47.084474 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" Feb 14 14:05:47 crc kubenswrapper[4750]: I0214 14:05:47.110274 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators-redhat/loki-operator-controller-manager-846996f79f-rwhb4" podStartSLOduration=2.146617833 podStartE2EDuration="12.110250203s" podCreationTimestamp="2026-02-14 14:05:35 +0000 UTC" firstStartedPulling="2026-02-14 14:05:35.967842724 +0000 UTC m=+807.993832205" lastFinishedPulling="2026-02-14 14:05:45.931475094 +0000 UTC m=+817.957464575" observedRunningTime="2026-02-14 14:05:47.103489482 +0000 UTC m=+819.129478963" watchObservedRunningTime="2026-02-14 14:05:47.110250203 +0000 UTC m=+819.136239714" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.178029 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 14 14:05:51 crc kubenswrapper[4750]: E0214 14:05:51.178893 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="extract-utilities" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.178909 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="extract-utilities" Feb 14 14:05:51 crc kubenswrapper[4750]: E0214 14:05:51.178933 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="registry-server" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.178943 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="registry-server" Feb 14 14:05:51 crc kubenswrapper[4750]: E0214 14:05:51.178957 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="extract-content" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.178969 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="extract-content" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.179140 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f65cedc-23af-4578-a81d-7647078525c7" containerName="registry-server" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.179648 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.183815 4750 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-5vtw8" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.184519 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.184580 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.198090 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.299912 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") " pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.300023 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp54g\" (UniqueName: \"kubernetes.io/projected/29b5dd5f-b995-47ec-a9c1-3246e25a724d-kube-api-access-lp54g\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") " pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.401821 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp54g\" (UniqueName: \"kubernetes.io/projected/29b5dd5f-b995-47ec-a9c1-3246e25a724d-kube-api-access-lp54g\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") " 
pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.402068 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") " pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.407469 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.407547 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74616081b3484498d1719f38e7ed262e700022b41c0266a18adde9c86ef91ae3/globalmount\"" pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.442681 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp54g\" (UniqueName: \"kubernetes.io/projected/29b5dd5f-b995-47ec-a9c1-3246e25a724d-kube-api-access-lp54g\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") " pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.459874 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56f1aa0d-aa4e-4a61-b952-1b05a6ce252a\") pod \"minio\" (UID: \"29b5dd5f-b995-47ec-a9c1-3246e25a724d\") " pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.534641 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 14 14:05:51 crc kubenswrapper[4750]: I0214 14:05:51.972631 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 14 14:05:51 crc kubenswrapper[4750]: W0214 14:05:51.979013 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29b5dd5f_b995_47ec_a9c1_3246e25a724d.slice/crio-b0071dfbd0afd510f590eb6d6fd061dc4cbc05e74ad6f98be85a07688b82954b WatchSource:0}: Error finding container b0071dfbd0afd510f590eb6d6fd061dc4cbc05e74ad6f98be85a07688b82954b: Status 404 returned error can't find the container with id b0071dfbd0afd510f590eb6d6fd061dc4cbc05e74ad6f98be85a07688b82954b Feb 14 14:05:52 crc kubenswrapper[4750]: I0214 14:05:52.110972 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"29b5dd5f-b995-47ec-a9c1-3246e25a724d","Type":"ContainerStarted","Data":"b0071dfbd0afd510f590eb6d6fd061dc4cbc05e74ad6f98be85a07688b82954b"} Feb 14 14:05:56 crc kubenswrapper[4750]: I0214 14:05:56.139342 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"29b5dd5f-b995-47ec-a9c1-3246e25a724d","Type":"ContainerStarted","Data":"8641b0b04a5430e30fea64e0263dbc3a82ab980f656a2b676e5ab3fe46b88537"} Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.737354 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=8.733964186 podStartE2EDuration="11.737334868s" podCreationTimestamp="2026-02-14 14:05:48 +0000 UTC" firstStartedPulling="2026-02-14 14:05:51.981517204 +0000 UTC m=+824.007506685" lastFinishedPulling="2026-02-14 14:05:54.984887886 +0000 UTC m=+827.010877367" observedRunningTime="2026-02-14 14:05:56.155562207 +0000 UTC m=+828.181551698" watchObservedRunningTime="2026-02-14 14:05:59.737334868 +0000 UTC m=+831.763324349" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.739502 4750 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87"] Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.740272 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.745153 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.745297 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-jgxw2" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.745557 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.748208 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.751074 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.760750 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87"] Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.818473 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-config\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.818509 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdms\" 
(UniqueName: \"kubernetes.io/projected/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-kube-api-access-zgdms\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.818543 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.818626 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.818663 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.907896 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr"] Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.908674 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.910262 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.910321 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.913434 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.919824 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxbl\" (UniqueName: \"kubernetes.io/projected/158c19c7-53f4-4964-89df-ee7509251e08-kube-api-access-mfxbl\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.919894 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-config\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.919922 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgdms\" (UniqueName: \"kubernetes.io/projected/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-kube-api-access-zgdms\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.919958 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.919983 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.920004 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.920058 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.920089 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-ca-bundle\") pod 
\"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.920157 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.920193 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.920219 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c19c7-53f4-4964-89df-ee7509251e08-config\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.921096 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-config\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.921335 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.931630 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr"] Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.935435 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.945846 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgdms\" (UniqueName: \"kubernetes.io/projected/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-kube-api-access-zgdms\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.956917 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/75cfd9e5-1c5d-4f8c-b736-c7f4d3415033-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-zgn87\" (UID: \"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.990713 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm"] Feb 14 14:05:59 crc 
kubenswrapper[4750]: I0214 14:05:59.991700 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.994867 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.995054 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 14 14:05:59 crc kubenswrapper[4750]: I0214 14:05:59.999513 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020608 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020645 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020687 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-config\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020717 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020739 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr54h\" (UniqueName: \"kubernetes.io/projected/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-kube-api-access-qr54h\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020797 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020822 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020840 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c19c7-53f4-4964-89df-ee7509251e08-config\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020867 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxbl\" (UniqueName: \"kubernetes.io/projected/158c19c7-53f4-4964-89df-ee7509251e08-kube-api-access-mfxbl\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020884 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.020912 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.021637 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.021899 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c19c7-53f4-4964-89df-ee7509251e08-config\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.023781 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.027430 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.037226 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxbl\" (UniqueName: \"kubernetes.io/projected/158c19c7-53f4-4964-89df-ee7509251e08-kube-api-access-mfxbl\") pod \"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.039743 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/158c19c7-53f4-4964-89df-ee7509251e08-logging-loki-querier-http\") pod 
\"logging-loki-querier-76bf7b6d45-hwkcr\" (UID: \"158c19c7-53f4-4964-89df-ee7509251e08\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.054518 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.103981 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-595d4c559-z8vwl"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.105223 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.109207 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-595d4c559-zsk8f"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.110017 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121667 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121707 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tls-secret\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121739 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-rbac\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121755 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121773 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-lokistack-gateway\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121796 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-config\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121855 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tenants\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.121970 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122008 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zbh\" (UniqueName: \"kubernetes.io/projected/abdb8ead-5282-4e10-a261-b90509d22bbd-kube-api-access-d9zbh\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " 
pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122041 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122223 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tls-secret\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122246 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-lokistack-gateway\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122274 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr54h\" (UniqueName: \"kubernetes.io/projected/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-kube-api-access-qr54h\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122322 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-rbac\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122341 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122361 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tenants\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122385 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxjt\" (UniqueName: \"kubernetes.io/projected/11746f0c-702d-4684-97e8-46c8b3f2d75a-kube-api-access-hvxjt\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122440 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " 
pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122477 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122500 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122532 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.122678 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-config\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.123487 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.127319 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.130032 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.131351 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.131571 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.132034 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.132143 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.143171 4750 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595d4c559-zsk8f"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.146384 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.146607 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-x2z6h" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.153067 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595d4c559-z8vwl"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.190939 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr54h\" (UniqueName: \"kubernetes.io/projected/6bb3081d-4136-43e1-a9a9-9d9b5ce10809-kube-api-access-qr54h\") pod \"logging-loki-query-frontend-6d6859c548-j8bzm\" (UID: \"6bb3081d-4136-43e1-a9a9-9d9b5ce10809\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.224108 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tenants\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.224177 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.224228 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9zbh\" (UniqueName: \"kubernetes.io/projected/abdb8ead-5282-4e10-a261-b90509d22bbd-kube-api-access-d9zbh\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.224253 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tls-secret\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.224300 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-lokistack-gateway\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.225648 4750 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.225754 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tls-secret podName:11746f0c-702d-4684-97e8-46c8b3f2d75a nodeName:}" failed. No retries permitted until 2026-02-14 14:06:00.725727493 +0000 UTC m=+832.751716974 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tls-secret") pod "logging-loki-gateway-595d4c559-zsk8f" (UID: "11746f0c-702d-4684-97e8-46c8b3f2d75a") : secret "logging-loki-gateway-http" not found Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.226739 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.226854 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-lokistack-gateway\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.227854 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-rbac\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.230343 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-rbac\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.230516 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.230668 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tenants\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.230739 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxjt\" (UniqueName: \"kubernetes.io/projected/11746f0c-702d-4684-97e8-46c8b3f2d75a-kube-api-access-hvxjt\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.230780 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233213 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233270 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233304 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233338 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tls-secret\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233380 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-rbac\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233395 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.233420 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-lokistack-gateway\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.234423 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-lokistack-gateway\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.234478 4750 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.234516 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-ca-bundle podName:11746f0c-702d-4684-97e8-46c8b3f2d75a nodeName:}" failed. 
No retries permitted until 2026-02-14 14:06:00.734501801 +0000 UTC m=+832.760491282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-595d4c559-zsk8f" (UID: "11746f0c-702d-4684-97e8-46c8b3f2d75a") : configmap "logging-loki-gateway-ca-bundle" not found Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.234564 4750 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.234582 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-ca-bundle podName:abdb8ead-5282-4e10-a261-b90509d22bbd nodeName:}" failed. No retries permitted until 2026-02-14 14:06:00.734576503 +0000 UTC m=+832.760565984 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-595d4c559-z8vwl" (UID: "abdb8ead-5282-4e10-a261-b90509d22bbd") : configmap "logging-loki-gateway-ca-bundle" not found Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.236405 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.237077 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tenants\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.237921 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.238227 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-rbac\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc 
kubenswrapper[4750]: E0214 14:06:00.238296 4750 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 14 14:06:00 crc kubenswrapper[4750]: E0214 14:06:00.238331 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tls-secret podName:abdb8ead-5282-4e10-a261-b90509d22bbd nodeName:}" failed. No retries permitted until 2026-02-14 14:06:00.738319859 +0000 UTC m=+832.764309340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tls-secret") pod "logging-loki-gateway-595d4c559-z8vwl" (UID: "abdb8ead-5282-4e10-a261-b90509d22bbd") : secret "logging-loki-gateway-http" not found Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.255095 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.257704 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxjt\" (UniqueName: \"kubernetes.io/projected/11746f0c-702d-4684-97e8-46c8b3f2d75a-kube-api-access-hvxjt\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.261036 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9zbh\" (UniqueName: \"kubernetes.io/projected/abdb8ead-5282-4e10-a261-b90509d22bbd-kube-api-access-d9zbh\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: 
\"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.264713 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tenants\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.320682 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.708402 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.741696 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tls-secret\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.742055 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tls-secret\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.742126 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-ca-bundle\") pod 
\"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.742156 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.743049 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdb8ead-5282-4e10-a261-b90509d22bbd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.743595 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11746f0c-702d-4684-97e8-46c8b3f2d75a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.747256 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abdb8ead-5282-4e10-a261-b90509d22bbd-tls-secret\") pod \"logging-loki-gateway-595d4c559-z8vwl\" (UID: \"abdb8ead-5282-4e10-a261-b90509d22bbd\") " pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.747397 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/11746f0c-702d-4684-97e8-46c8b3f2d75a-tls-secret\") pod \"logging-loki-gateway-595d4c559-zsk8f\" (UID: \"11746f0c-702d-4684-97e8-46c8b3f2d75a\") " pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.751425 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.774656 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr"] Feb 14 14:06:00 crc kubenswrapper[4750]: W0214 14:06:00.778068 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158c19c7_53f4_4964_89df_ee7509251e08.slice/crio-f0fd3ae550237980cbe653874f5a78ea081f5f10a23eb8a87274b2810f0c3643 WatchSource:0}: Error finding container f0fd3ae550237980cbe653874f5a78ea081f5f10a23eb8a87274b2810f0c3643: Status 404 returned error can't find the container with id f0fd3ae550237980cbe653874f5a78ea081f5f10a23eb8a87274b2810f0c3643 Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.786506 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.886526 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.887539 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.889947 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.891448 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.908527 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.911768 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.972617 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.975391 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.978824 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.979036 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 14 14:06:00 crc kubenswrapper[4750]: I0214 14:06:00.988757 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.052780 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.053634 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055223 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055272 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055299 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055333 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055363 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055393 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.055976 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.056194 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.056604 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d23b53-2885-4966-aa62-1e61fd2f2af6-config\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.056629 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zghf\" (UniqueName: \"kubernetes.io/projected/d4d23b53-2885-4966-aa62-1e61fd2f2af6-kube-api-access-9zghf\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.071001 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.157739 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158069 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwfv\" (UniqueName: \"kubernetes.io/projected/7d57e203-6e0c-4079-ba36-ffb3c7e69913-kube-api-access-9pwfv\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158201 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158286 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158439 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" 
(UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158547 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zghf\" (UniqueName: \"kubernetes.io/projected/d4d23b53-2885-4966-aa62-1e61fd2f2af6-kube-api-access-9zghf\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158664 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158749 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.158874 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.159002 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.159103 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.160322 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.160479 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.160584 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d57e203-6e0c-4079-ba36-ffb3c7e69913-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.160674 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.160871 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.160987 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.161235 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d23b53-2885-4966-aa62-1e61fd2f2af6-config\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.161374 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9s5p\" (UniqueName: \"kubernetes.io/projected/bfa5f022-80f6-4ae5-8734-d6b9b9925490-kube-api-access-x9s5p\") pod 
\"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.161525 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.161679 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.161795 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa5f022-80f6-4ae5-8734-d6b9b9925490-config\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.162404 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.162574 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.162617 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f9dd00b74cca471bb1ee3db4a812d728bbf6728d7af705021d25afd1257881b/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.163061 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d23b53-2885-4966-aa62-1e61fd2f2af6-config\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.165696 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.167288 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.167960 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.167992 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/43b4e5434df1c3da57889ea03a791f25a1be90be6a2b7f3566f9d74f252e0435/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.175886 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zghf\" (UniqueName: \"kubernetes.io/projected/d4d23b53-2885-4966-aa62-1e61fd2f2af6-kube-api-access-9zghf\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.182737 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d4d23b53-2885-4966-aa62-1e61fd2f2af6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.190847 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec6e90-b01b-49c2-aedd-aea8478e8f77\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.193434 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bf91bea-f6db-4b4b-9f03-62489ad84953\") pod \"logging-loki-ingester-0\" (UID: \"d4d23b53-2885-4966-aa62-1e61fd2f2af6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.223459 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.227431 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" event={"ID":"6bb3081d-4136-43e1-a9a9-9d9b5ce10809","Type":"ContainerStarted","Data":"3771149e5128937c178119e32c07a2649fa817140bbdb18896b2b6d97c5153bb"} Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.229130 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" event={"ID":"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033","Type":"ContainerStarted","Data":"2d9a0c4cfc0cccffdf763391c5a0cc6aede828ea53584d2f4d3050b3affeb7c6"} Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.230062 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" event={"ID":"158c19c7-53f4-4964-89df-ee7509251e08","Type":"ContainerStarted","Data":"f0fd3ae550237980cbe653874f5a78ea081f5f10a23eb8a87274b2810f0c3643"} Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.262960 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa5f022-80f6-4ae5-8734-d6b9b9925490-config\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263023 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwfv\" (UniqueName: 
\"kubernetes.io/projected/7d57e203-6e0c-4079-ba36-ffb3c7e69913-kube-api-access-9pwfv\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263056 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263083 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263150 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263175 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263204 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263237 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263276 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263313 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263340 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d57e203-6e0c-4079-ba36-ffb3c7e69913-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc 
kubenswrapper[4750]: I0214 14:06:01.263365 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263389 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.263437 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9s5p\" (UniqueName: \"kubernetes.io/projected/bfa5f022-80f6-4ae5-8734-d6b9b9925490-kube-api-access-x9s5p\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.264313 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa5f022-80f6-4ae5-8734-d6b9b9925490-config\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.264493 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc 
kubenswrapper[4750]: I0214 14:06:01.266364 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.267304 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d57e203-6e0c-4079-ba36-ffb3c7e69913-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.268915 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.270894 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.270925 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/537c31a4463284e8cbc523a8627c55ad4a8b1325a8c4588dc2c65a3b18943bbe/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.271096 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.271163 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/475126974f5b2d5a682dba9509acf9b27d75f190dd2c6214c80308de6e7e66e3/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.271270 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.272491 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.274406 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d57e203-6e0c-4079-ba36-ffb3c7e69913-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.277401 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.285860 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/bfa5f022-80f6-4ae5-8734-d6b9b9925490-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.294349 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwfv\" (UniqueName: \"kubernetes.io/projected/7d57e203-6e0c-4079-ba36-ffb3c7e69913-kube-api-access-9pwfv\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.298270 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9s5p\" (UniqueName: 
\"kubernetes.io/projected/bfa5f022-80f6-4ae5-8734-d6b9b9925490-kube-api-access-x9s5p\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.301915 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa92127-0413-4d10-a86c-71bda52bb6b1\") pod \"logging-loki-index-gateway-0\" (UID: \"7d57e203-6e0c-4079-ba36-ffb3c7e69913\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.302475 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595d4c559-z8vwl"] Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.313961 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1857a5e3-01f6-4907-9995-d7646e26f9d6\") pod \"logging-loki-compactor-0\" (UID: \"bfa5f022-80f6-4ae5-8734-d6b9b9925490\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.358517 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595d4c559-zsk8f"] Feb 14 14:06:01 crc kubenswrapper[4750]: W0214 14:06:01.363191 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11746f0c_702d_4684_97e8_46c8b3f2d75a.slice/crio-42a7c1268e4de06b417273660e07c49975a94f32ef263023ab9794a52a89cd2d WatchSource:0}: Error finding container 42a7c1268e4de06b417273660e07c49975a94f32ef263023ab9794a52a89cd2d: Status 404 returned error can't find the container with id 42a7c1268e4de06b417273660e07c49975a94f32ef263023ab9794a52a89cd2d Feb 14 14:06:01 crc 
kubenswrapper[4750]: I0214 14:06:01.374841 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.456136 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 14:06:01 crc kubenswrapper[4750]: W0214 14:06:01.462790 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d23b53_2885_4966_aa62_1e61fd2f2af6.slice/crio-12556cda89307bf8a3cf58ed81bf5e2a309be87ed2365e5474f295cbb704f357 WatchSource:0}: Error finding container 12556cda89307bf8a3cf58ed81bf5e2a309be87ed2365e5474f295cbb704f357: Status 404 returned error can't find the container with id 12556cda89307bf8a3cf58ed81bf5e2a309be87ed2365e5474f295cbb704f357 Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.592172 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 14:06:01 crc kubenswrapper[4750]: W0214 14:06:01.597657 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d57e203_6e0c_4079_ba36_ffb3c7e69913.slice/crio-dfe18f9fd794c35f4be18df2396381699e7c792d8372f8c42a055b053c4916ce WatchSource:0}: Error finding container dfe18f9fd794c35f4be18df2396381699e7c792d8372f8c42a055b053c4916ce: Status 404 returned error can't find the container with id dfe18f9fd794c35f4be18df2396381699e7c792d8372f8c42a055b053c4916ce Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.597764 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:01 crc kubenswrapper[4750]: I0214 14:06:01.826911 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 14:06:01 crc kubenswrapper[4750]: W0214 14:06:01.833311 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa5f022_80f6_4ae5_8734_d6b9b9925490.slice/crio-e6af2a549616bc031378081b20bf173d0e52c70d2ecf16d466e36dfeb33401e1 WatchSource:0}: Error finding container e6af2a549616bc031378081b20bf173d0e52c70d2ecf16d466e36dfeb33401e1: Status 404 returned error can't find the container with id e6af2a549616bc031378081b20bf173d0e52c70d2ecf16d466e36dfeb33401e1 Feb 14 14:06:02 crc kubenswrapper[4750]: I0214 14:06:02.238812 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7d57e203-6e0c-4079-ba36-ffb3c7e69913","Type":"ContainerStarted","Data":"dfe18f9fd794c35f4be18df2396381699e7c792d8372f8c42a055b053c4916ce"} Feb 14 14:06:02 crc kubenswrapper[4750]: I0214 14:06:02.240552 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" event={"ID":"11746f0c-702d-4684-97e8-46c8b3f2d75a","Type":"ContainerStarted","Data":"42a7c1268e4de06b417273660e07c49975a94f32ef263023ab9794a52a89cd2d"} Feb 14 14:06:02 crc kubenswrapper[4750]: I0214 14:06:02.241370 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"d4d23b53-2885-4966-aa62-1e61fd2f2af6","Type":"ContainerStarted","Data":"12556cda89307bf8a3cf58ed81bf5e2a309be87ed2365e5474f295cbb704f357"} Feb 14 14:06:02 crc kubenswrapper[4750]: I0214 14:06:02.242290 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" 
event={"ID":"abdb8ead-5282-4e10-a261-b90509d22bbd","Type":"ContainerStarted","Data":"d4f8c3aead35bd6c7f2d8c6e07764b761f4f006e62839f982d6f8d837c39a6a7"} Feb 14 14:06:02 crc kubenswrapper[4750]: I0214 14:06:02.243100 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"bfa5f022-80f6-4ae5-8734-d6b9b9925490","Type":"ContainerStarted","Data":"e6af2a549616bc031378081b20bf173d0e52c70d2ecf16d466e36dfeb33401e1"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.263802 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" event={"ID":"75cfd9e5-1c5d-4f8c-b736-c7f4d3415033","Type":"ContainerStarted","Data":"c9bfc725209a3adf15d9fb777a3de153c6576c4fa5b4ba51d9c5888a14d41aec"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.264194 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.265377 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" event={"ID":"11746f0c-702d-4684-97e8-46c8b3f2d75a","Type":"ContainerStarted","Data":"a7ad35dc157443ad97d901ff02e05fce85cf9af220d65ae41578a92a4a4c3f88"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.269698 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" event={"ID":"6bb3081d-4136-43e1-a9a9-9d9b5ce10809","Type":"ContainerStarted","Data":"7e27204c4a8c0826a86b5296848478cd118b6c17dec7f2dbf151368aa3ddd23f"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.269820 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.279908 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-ingester-0" event={"ID":"d4d23b53-2885-4966-aa62-1e61fd2f2af6","Type":"ContainerStarted","Data":"310dfdb64c1f353c84d4e52f9ef0fd131606dda27345376a7926efbba68e4672"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.280063 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.283042 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" event={"ID":"abdb8ead-5282-4e10-a261-b90509d22bbd","Type":"ContainerStarted","Data":"fb7c1dc0bd3827b6c677f4c8937ff5bfa4493a3f606a41433c3a005ce37e3bf5"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.284272 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"bfa5f022-80f6-4ae5-8734-d6b9b9925490","Type":"ContainerStarted","Data":"26803eaaf80edc3fb6d75c0f77ab82ae5db6ed53391a67156620fa9e9a2bdbd3"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.285001 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.286608 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7d57e203-6e0c-4079-ba36-ffb3c7e69913","Type":"ContainerStarted","Data":"ebe06b6fb81b0012b2ba3b6395d27162587e91d3256ba8f0f2853a1c773cce9d"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.287179 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.288207 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" 
event={"ID":"158c19c7-53f4-4964-89df-ee7509251e08","Type":"ContainerStarted","Data":"8f633d4f01d2964fe3b687fb07ebad5258292fe7f5cb800a370ea5a08650ad3a"} Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.288694 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.289255 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" podStartSLOduration=2.1733212379999998 podStartE2EDuration="6.289241318s" podCreationTimestamp="2026-02-14 14:05:59 +0000 UTC" firstStartedPulling="2026-02-14 14:06:00.713797979 +0000 UTC m=+832.739787460" lastFinishedPulling="2026-02-14 14:06:04.829718059 +0000 UTC m=+836.855707540" observedRunningTime="2026-02-14 14:06:05.281274412 +0000 UTC m=+837.307263894" watchObservedRunningTime="2026-02-14 14:06:05.289241318 +0000 UTC m=+837.315230799" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.299706 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" podStartSLOduration=2.409933105 podStartE2EDuration="6.299693613s" podCreationTimestamp="2026-02-14 14:05:59 +0000 UTC" firstStartedPulling="2026-02-14 14:06:00.919289117 +0000 UTC m=+832.945278588" lastFinishedPulling="2026-02-14 14:06:04.809049615 +0000 UTC m=+836.835039096" observedRunningTime="2026-02-14 14:06:05.299189519 +0000 UTC m=+837.325179000" watchObservedRunningTime="2026-02-14 14:06:05.299693613 +0000 UTC m=+837.325683094" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.324639 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.001827317 podStartE2EDuration="6.324618848s" podCreationTimestamp="2026-02-14 14:05:59 +0000 UTC" firstStartedPulling="2026-02-14 14:06:01.46666533 
+0000 UTC m=+833.492654831" lastFinishedPulling="2026-02-14 14:06:04.789456881 +0000 UTC m=+836.815446362" observedRunningTime="2026-02-14 14:06:05.318705961 +0000 UTC m=+837.344695442" watchObservedRunningTime="2026-02-14 14:06:05.324618848 +0000 UTC m=+837.350608329" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.347807 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.147307068 podStartE2EDuration="5.347783112s" podCreationTimestamp="2026-02-14 14:06:00 +0000 UTC" firstStartedPulling="2026-02-14 14:06:01.600276046 +0000 UTC m=+833.626265527" lastFinishedPulling="2026-02-14 14:06:04.80075209 +0000 UTC m=+836.826741571" observedRunningTime="2026-02-14 14:06:05.337380008 +0000 UTC m=+837.363369489" watchObservedRunningTime="2026-02-14 14:06:05.347783112 +0000 UTC m=+837.373772593" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.365616 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" podStartSLOduration=2.357585856 podStartE2EDuration="6.365594336s" podCreationTimestamp="2026-02-14 14:05:59 +0000 UTC" firstStartedPulling="2026-02-14 14:06:00.781381329 +0000 UTC m=+832.807370850" lastFinishedPulling="2026-02-14 14:06:04.789389829 +0000 UTC m=+836.815379330" observedRunningTime="2026-02-14 14:06:05.360534843 +0000 UTC m=+837.386524334" watchObservedRunningTime="2026-02-14 14:06:05.365594336 +0000 UTC m=+837.391583807" Feb 14 14:06:05 crc kubenswrapper[4750]: I0214 14:06:05.386066 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.52314422 podStartE2EDuration="6.386047403s" podCreationTimestamp="2026-02-14 14:05:59 +0000 UTC" firstStartedPulling="2026-02-14 14:06:01.8389004 +0000 UTC m=+833.864889891" lastFinishedPulling="2026-02-14 14:06:04.701803553 +0000 UTC m=+836.727793074" 
observedRunningTime="2026-02-14 14:06:05.384011676 +0000 UTC m=+837.410001157" watchObservedRunningTime="2026-02-14 14:06:05.386047403 +0000 UTC m=+837.412036884" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.364599 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" event={"ID":"11746f0c-702d-4684-97e8-46c8b3f2d75a","Type":"ContainerStarted","Data":"712bc9dd658be68e21a7b4fffe86aefb9e28db0bf8c937cd94d2f393e34cdfbb"} Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.365495 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.365805 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.368732 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" event={"ID":"abdb8ead-5282-4e10-a261-b90509d22bbd","Type":"ContainerStarted","Data":"4ef5e1a1471a79a77ba2147a2f20da1fe68be7be3f31b9a6c679b2c08e42d53a"} Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.369059 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.379802 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.381416 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.393651 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" 
Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.412828 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-595d4c559-zsk8f" podStartSLOduration=2.5391572350000002 podStartE2EDuration="11.412796455s" podCreationTimestamp="2026-02-14 14:06:00 +0000 UTC" firstStartedPulling="2026-02-14 14:06:01.36725956 +0000 UTC m=+833.393249051" lastFinishedPulling="2026-02-14 14:06:10.24089878 +0000 UTC m=+842.266888271" observedRunningTime="2026-02-14 14:06:11.401907067 +0000 UTC m=+843.427896588" watchObservedRunningTime="2026-02-14 14:06:11.412796455 +0000 UTC m=+843.438785966" Feb 14 14:06:11 crc kubenswrapper[4750]: I0214 14:06:11.490574 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" podStartSLOduration=2.561424664 podStartE2EDuration="11.490551872s" podCreationTimestamp="2026-02-14 14:06:00 +0000 UTC" firstStartedPulling="2026-02-14 14:06:01.317350519 +0000 UTC m=+833.343340000" lastFinishedPulling="2026-02-14 14:06:10.246477717 +0000 UTC m=+842.272467208" observedRunningTime="2026-02-14 14:06:11.485248543 +0000 UTC m=+843.511238044" watchObservedRunningTime="2026-02-14 14:06:11.490551872 +0000 UTC m=+843.516541363" Feb 14 14:06:12 crc kubenswrapper[4750]: I0214 14:06:12.380778 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:12 crc kubenswrapper[4750]: I0214 14:06:12.396186 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595d4c559-z8vwl" Feb 14 14:06:20 crc kubenswrapper[4750]: I0214 14:06:20.061310 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zgn87" Feb 14 14:06:20 crc kubenswrapper[4750]: I0214 14:06:20.239163 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-hwkcr" Feb 14 14:06:20 crc kubenswrapper[4750]: I0214 14:06:20.327874 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-j8bzm" Feb 14 14:06:21 crc kubenswrapper[4750]: I0214 14:06:21.233894 4750 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 14 14:06:21 crc kubenswrapper[4750]: I0214 14:06:21.233974 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d4d23b53-2885-4966-aa62-1e61fd2f2af6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 14:06:21 crc kubenswrapper[4750]: I0214 14:06:21.385625 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 14:06:21 crc kubenswrapper[4750]: I0214 14:06:21.608638 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 14 14:06:31 crc kubenswrapper[4750]: I0214 14:06:31.232319 4750 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 14 14:06:31 crc kubenswrapper[4750]: I0214 14:06:31.232853 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d4d23b53-2885-4966-aa62-1e61fd2f2af6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 14:06:41 crc kubenswrapper[4750]: I0214 14:06:41.231662 
4750 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 14:06:41 crc kubenswrapper[4750]: I0214 14:06:41.232464 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d4d23b53-2885-4966-aa62-1e61fd2f2af6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.677137 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wmm7n"] Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.678960 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.695489 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmm7n"] Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.792725 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-utilities\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.792932 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-catalog-content\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.793077 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8t8m\" (UniqueName: \"kubernetes.io/projected/fb0ead9f-e82d-4644-8674-b520445e592f-kube-api-access-g8t8m\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.894756 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-utilities\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.894844 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-catalog-content\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.894896 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8t8m\" (UniqueName: \"kubernetes.io/projected/fb0ead9f-e82d-4644-8674-b520445e592f-kube-api-access-g8t8m\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.895373 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-utilities\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.895554 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-catalog-content\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:46 crc kubenswrapper[4750]: I0214 14:06:46.919603 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8t8m\" (UniqueName: \"kubernetes.io/projected/fb0ead9f-e82d-4644-8674-b520445e592f-kube-api-access-g8t8m\") pod \"redhat-marketplace-wmm7n\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:47 crc kubenswrapper[4750]: I0214 14:06:47.016386 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:47 crc kubenswrapper[4750]: I0214 14:06:47.471544 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmm7n"] Feb 14 14:06:47 crc kubenswrapper[4750]: W0214 14:06:47.482377 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb0ead9f_e82d_4644_8674_b520445e592f.slice/crio-1d57e1d964c932c1844fb31cc44d3785964786ac39534bcfe83ffd82c4d87ce6 WatchSource:0}: Error finding container 1d57e1d964c932c1844fb31cc44d3785964786ac39534bcfe83ffd82c4d87ce6: Status 404 returned error can't find the container with id 1d57e1d964c932c1844fb31cc44d3785964786ac39534bcfe83ffd82c4d87ce6 Feb 14 14:06:47 crc kubenswrapper[4750]: I0214 14:06:47.672230 4750 generic.go:334] "Generic (PLEG): container finished" podID="fb0ead9f-e82d-4644-8674-b520445e592f" containerID="fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101" exitCode=0 Feb 14 14:06:47 crc kubenswrapper[4750]: I0214 14:06:47.672290 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wmm7n" event={"ID":"fb0ead9f-e82d-4644-8674-b520445e592f","Type":"ContainerDied","Data":"fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101"} Feb 14 14:06:47 crc kubenswrapper[4750]: I0214 14:06:47.672563 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmm7n" event={"ID":"fb0ead9f-e82d-4644-8674-b520445e592f","Type":"ContainerStarted","Data":"1d57e1d964c932c1844fb31cc44d3785964786ac39534bcfe83ffd82c4d87ce6"} Feb 14 14:06:48 crc kubenswrapper[4750]: I0214 14:06:48.679719 4750 generic.go:334] "Generic (PLEG): container finished" podID="fb0ead9f-e82d-4644-8674-b520445e592f" containerID="f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6" exitCode=0 Feb 14 14:06:48 crc kubenswrapper[4750]: I0214 14:06:48.679777 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmm7n" event={"ID":"fb0ead9f-e82d-4644-8674-b520445e592f","Type":"ContainerDied","Data":"f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6"} Feb 14 14:06:49 crc kubenswrapper[4750]: I0214 14:06:49.687374 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmm7n" event={"ID":"fb0ead9f-e82d-4644-8674-b520445e592f","Type":"ContainerStarted","Data":"29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d"} Feb 14 14:06:49 crc kubenswrapper[4750]: I0214 14:06:49.706888 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wmm7n" podStartSLOduration=2.091600665 podStartE2EDuration="3.706868366s" podCreationTimestamp="2026-02-14 14:06:46 +0000 UTC" firstStartedPulling="2026-02-14 14:06:47.674226127 +0000 UTC m=+879.700215598" lastFinishedPulling="2026-02-14 14:06:49.289493778 +0000 UTC m=+881.315483299" observedRunningTime="2026-02-14 14:06:49.705370494 +0000 UTC m=+881.731359995" 
watchObservedRunningTime="2026-02-14 14:06:49.706868366 +0000 UTC m=+881.732857847" Feb 14 14:06:51 crc kubenswrapper[4750]: I0214 14:06:51.232715 4750 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 14:06:51 crc kubenswrapper[4750]: I0214 14:06:51.233040 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="d4d23b53-2885-4966-aa62-1e61fd2f2af6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.096311 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9snx4"] Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.100072 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.121992 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9snx4"] Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.217921 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-catalog-content\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.218046 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-utilities\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.218266 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqnwg\" (UniqueName: \"kubernetes.io/projected/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-kube-api-access-rqnwg\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.319421 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqnwg\" (UniqueName: \"kubernetes.io/projected/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-kube-api-access-rqnwg\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.319501 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-catalog-content\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.319547 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-utilities\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.320022 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-utilities\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.320060 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-catalog-content\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.342028 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqnwg\" (UniqueName: \"kubernetes.io/projected/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-kube-api-access-rqnwg\") pod \"certified-operators-9snx4\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.418546 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:06:54 crc kubenswrapper[4750]: I0214 14:06:54.852781 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9snx4"] Feb 14 14:06:55 crc kubenswrapper[4750]: I0214 14:06:55.744305 4750 generic.go:334] "Generic (PLEG): container finished" podID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerID="c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747" exitCode=0 Feb 14 14:06:55 crc kubenswrapper[4750]: I0214 14:06:55.744431 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerDied","Data":"c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747"} Feb 14 14:06:55 crc kubenswrapper[4750]: I0214 14:06:55.744663 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerStarted","Data":"c03505f3936dcdf42e2a5f13228e968ec86887218829ce8cb818024850458d83"} Feb 14 14:06:56 crc kubenswrapper[4750]: I0214 14:06:56.754557 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerStarted","Data":"bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337"} Feb 14 14:06:57 crc kubenswrapper[4750]: I0214 14:06:57.017379 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:57 crc kubenswrapper[4750]: I0214 14:06:57.017437 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:57 crc kubenswrapper[4750]: I0214 14:06:57.073221 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:57 crc kubenswrapper[4750]: I0214 14:06:57.766822 4750 generic.go:334] "Generic (PLEG): container finished" podID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerID="bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337" exitCode=0 Feb 14 14:06:57 crc kubenswrapper[4750]: I0214 14:06:57.766965 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerDied","Data":"bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337"} Feb 14 14:06:57 crc kubenswrapper[4750]: I0214 14:06:57.835109 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:06:58 crc kubenswrapper[4750]: I0214 14:06:58.788267 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerStarted","Data":"1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c"} Feb 14 14:06:58 crc kubenswrapper[4750]: I0214 14:06:58.810770 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9snx4" podStartSLOduration=2.404761777 podStartE2EDuration="4.810752113s" podCreationTimestamp="2026-02-14 14:06:54 +0000 UTC" firstStartedPulling="2026-02-14 14:06:55.747592572 +0000 UTC m=+887.773582063" lastFinishedPulling="2026-02-14 14:06:58.153582918 +0000 UTC m=+890.179572399" observedRunningTime="2026-02-14 14:06:58.807672386 +0000 UTC m=+890.833661867" watchObservedRunningTime="2026-02-14 14:06:58.810752113 +0000 UTC m=+890.836741614" Feb 14 14:06:59 crc kubenswrapper[4750]: I0214 14:06:59.468675 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmm7n"] Feb 14 14:06:59 crc kubenswrapper[4750]: I0214 
14:06:59.795231 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wmm7n" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="registry-server" containerID="cri-o://29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d" gracePeriod=2 Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.235047 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.325878 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-catalog-content\") pod \"fb0ead9f-e82d-4644-8674-b520445e592f\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.325950 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8t8m\" (UniqueName: \"kubernetes.io/projected/fb0ead9f-e82d-4644-8674-b520445e592f-kube-api-access-g8t8m\") pod \"fb0ead9f-e82d-4644-8674-b520445e592f\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.326136 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-utilities\") pod \"fb0ead9f-e82d-4644-8674-b520445e592f\" (UID: \"fb0ead9f-e82d-4644-8674-b520445e592f\") " Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.327595 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-utilities" (OuterVolumeSpecName: "utilities") pod "fb0ead9f-e82d-4644-8674-b520445e592f" (UID: "fb0ead9f-e82d-4644-8674-b520445e592f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.333461 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0ead9f-e82d-4644-8674-b520445e592f-kube-api-access-g8t8m" (OuterVolumeSpecName: "kube-api-access-g8t8m") pod "fb0ead9f-e82d-4644-8674-b520445e592f" (UID: "fb0ead9f-e82d-4644-8674-b520445e592f"). InnerVolumeSpecName "kube-api-access-g8t8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.355501 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb0ead9f-e82d-4644-8674-b520445e592f" (UID: "fb0ead9f-e82d-4644-8674-b520445e592f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.428861 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.428902 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0ead9f-e82d-4644-8674-b520445e592f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.428915 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8t8m\" (UniqueName: \"kubernetes.io/projected/fb0ead9f-e82d-4644-8674-b520445e592f-kube-api-access-g8t8m\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.811911 4750 generic.go:334] "Generic (PLEG): container finished" podID="fb0ead9f-e82d-4644-8674-b520445e592f" 
containerID="29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d" exitCode=0 Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.811953 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmm7n" event={"ID":"fb0ead9f-e82d-4644-8674-b520445e592f","Type":"ContainerDied","Data":"29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d"} Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.811981 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmm7n" event={"ID":"fb0ead9f-e82d-4644-8674-b520445e592f","Type":"ContainerDied","Data":"1d57e1d964c932c1844fb31cc44d3785964786ac39534bcfe83ffd82c4d87ce6"} Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.811998 4750 scope.go:117] "RemoveContainer" containerID="29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.813319 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmm7n" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.846963 4750 scope.go:117] "RemoveContainer" containerID="f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.848394 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmm7n"] Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.862738 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmm7n"] Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.868587 4750 scope.go:117] "RemoveContainer" containerID="fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.897088 4750 scope.go:117] "RemoveContainer" containerID="29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d" Feb 14 14:07:00 crc kubenswrapper[4750]: E0214 14:07:00.897877 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d\": container with ID starting with 29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d not found: ID does not exist" containerID="29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.897945 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d"} err="failed to get container status \"29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d\": rpc error: code = NotFound desc = could not find container \"29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d\": container with ID starting with 29999d191ea2fcd9c822d635d8303bd2df522af6e36aaf8abb487af72ea53d8d not found: 
ID does not exist" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.897987 4750 scope.go:117] "RemoveContainer" containerID="f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6" Feb 14 14:07:00 crc kubenswrapper[4750]: E0214 14:07:00.898669 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6\": container with ID starting with f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6 not found: ID does not exist" containerID="f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.898852 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6"} err="failed to get container status \"f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6\": rpc error: code = NotFound desc = could not find container \"f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6\": container with ID starting with f13e5cea82b3dad74556c0870e92a365de64c7615f4e4aa30e44584dd8fa29a6 not found: ID does not exist" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.899002 4750 scope.go:117] "RemoveContainer" containerID="fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101" Feb 14 14:07:00 crc kubenswrapper[4750]: E0214 14:07:00.899705 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101\": container with ID starting with fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101 not found: ID does not exist" containerID="fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101" Feb 14 14:07:00 crc kubenswrapper[4750]: I0214 14:07:00.899752 4750 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101"} err="failed to get container status \"fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101\": rpc error: code = NotFound desc = could not find container \"fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101\": container with ID starting with fe186cee34f39603d9c9c66b3039cc5f2836b885bda95a21ac975059ef54b101 not found: ID does not exist" Feb 14 14:07:01 crc kubenswrapper[4750]: I0214 14:07:01.234643 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 14 14:07:02 crc kubenswrapper[4750]: I0214 14:07:02.781316 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" path="/var/lib/kubelet/pods/fb0ead9f-e82d-4644-8674-b520445e592f/volumes" Feb 14 14:07:04 crc kubenswrapper[4750]: I0214 14:07:04.420056 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:07:04 crc kubenswrapper[4750]: I0214 14:07:04.420498 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:07:04 crc kubenswrapper[4750]: I0214 14:07:04.498323 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:07:04 crc kubenswrapper[4750]: I0214 14:07:04.909196 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:07:04 crc kubenswrapper[4750]: I0214 14:07:04.967924 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9snx4"] Feb 14 14:07:06 crc kubenswrapper[4750]: I0214 14:07:06.871722 4750 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-9snx4" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="registry-server" containerID="cri-o://1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c" gracePeriod=2 Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.334822 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.361826 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-utilities\") pod \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.362002 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-catalog-content\") pod \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.362046 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqnwg\" (UniqueName: \"kubernetes.io/projected/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-kube-api-access-rqnwg\") pod \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\" (UID: \"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5\") " Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.366646 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-utilities" (OuterVolumeSpecName: "utilities") pod "cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" (UID: "cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.371251 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-kube-api-access-rqnwg" (OuterVolumeSpecName: "kube-api-access-rqnwg") pod "cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" (UID: "cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5"). InnerVolumeSpecName "kube-api-access-rqnwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.415376 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" (UID: "cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.464652 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.464903 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.464914 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqnwg\" (UniqueName: \"kubernetes.io/projected/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5-kube-api-access-rqnwg\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.886745 4750 generic.go:334] "Generic (PLEG): container finished" podID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" 
containerID="1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c" exitCode=0 Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.886815 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerDied","Data":"1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c"} Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.886860 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9snx4" event={"ID":"cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5","Type":"ContainerDied","Data":"c03505f3936dcdf42e2a5f13228e968ec86887218829ce8cb818024850458d83"} Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.886893 4750 scope.go:117] "RemoveContainer" containerID="1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.887094 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9snx4" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.928798 4750 scope.go:117] "RemoveContainer" containerID="bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.966719 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9snx4"] Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.967759 4750 scope.go:117] "RemoveContainer" containerID="c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747" Feb 14 14:07:07 crc kubenswrapper[4750]: I0214 14:07:07.978262 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9snx4"] Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.008058 4750 scope.go:117] "RemoveContainer" containerID="1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c" Feb 14 14:07:08 crc kubenswrapper[4750]: E0214 14:07:08.008620 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c\": container with ID starting with 1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c not found: ID does not exist" containerID="1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c" Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.008675 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c"} err="failed to get container status \"1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c\": rpc error: code = NotFound desc = could not find container \"1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c\": container with ID starting with 1035182ddace94966a2e9dc780f09e52dc1701636a2605f04208242a5c9fdb2c not 
found: ID does not exist" Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.008721 4750 scope.go:117] "RemoveContainer" containerID="bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337" Feb 14 14:07:08 crc kubenswrapper[4750]: E0214 14:07:08.009186 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337\": container with ID starting with bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337 not found: ID does not exist" containerID="bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337" Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.009225 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337"} err="failed to get container status \"bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337\": rpc error: code = NotFound desc = could not find container \"bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337\": container with ID starting with bb45510397e9095b9fd7a77787ebd840c07a206b0adb5f59d3ddf6cf643f1337 not found: ID does not exist" Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.009253 4750 scope.go:117] "RemoveContainer" containerID="c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747" Feb 14 14:07:08 crc kubenswrapper[4750]: E0214 14:07:08.009678 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747\": container with ID starting with c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747 not found: ID does not exist" containerID="c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747" Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.009707 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747"} err="failed to get container status \"c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747\": rpc error: code = NotFound desc = could not find container \"c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747\": container with ID starting with c8a04426339b81076a8883c8d8c0bf3372f99fa154d670c180d54abc9e591747 not found: ID does not exist" Feb 14 14:07:08 crc kubenswrapper[4750]: I0214 14:07:08.763324 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" path="/var/lib/kubelet/pods/cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5/volumes" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704018 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-h2dz8"] Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.704752 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="registry-server" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704766 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="registry-server" Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.704779 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="extract-utilities" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704784 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="extract-utilities" Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.704794 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="extract-content" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704801 
4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="extract-content" Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.704820 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="registry-server" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704825 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="registry-server" Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.704835 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="extract-content" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704842 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="extract-content" Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.704855 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="extract-utilities" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704862 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="extract-utilities" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704979 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd778b79-5a9b-41d2-b7f5-1fb0cd0aaba5" containerName="registry-server" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.704991 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0ead9f-e82d-4644-8674-b520445e592f" containerName="registry-server" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.705488 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.716339 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.716483 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.716626 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.716493 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.717192 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-bqg6j" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.728155 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.735515 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-h2dz8"] Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795167 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-trusted-ca\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795459 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/db785d20-47a4-44dc-b950-b27c9229e7af-datadir\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " 
pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795539 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/db785d20-47a4-44dc-b950-b27c9229e7af-tmp\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795643 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-sa-token\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795715 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-entrypoint\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795782 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.795874 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-kube-api-access-trfww\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 
14:07:20.795961 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config-openshift-service-cacrt\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.796034 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-token\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.796104 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.796193 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.897818 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-sa-token\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898137 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898153 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-entrypoint\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898200 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-kube-api-access-trfww\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898231 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config-openshift-service-cacrt\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898250 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-token\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898268 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898285 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898314 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-trusted-ca\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898341 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/db785d20-47a4-44dc-b950-b27c9229e7af-datadir\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.898355 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/db785d20-47a4-44dc-b950-b27c9229e7af-tmp\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.899693 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config-openshift-service-cacrt\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " 
pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.901447 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/db785d20-47a4-44dc-b950-b27c9229e7af-datadir\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.901457 4750 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.901497 4750 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.901532 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver podName:db785d20-47a4-44dc-b950-b27c9229e7af nodeName:}" failed. No retries permitted until 2026-02-14 14:07:21.401516639 +0000 UTC m=+913.427506210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver") pod "collector-h2dz8" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af") : secret "collector-syslog-receiver" not found Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.901598 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics podName:db785d20-47a4-44dc-b950-b27c9229e7af nodeName:}" failed. No retries permitted until 2026-02-14 14:07:21.40156511 +0000 UTC m=+913.427554661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics") pod "collector-h2dz8" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af") : secret "collector-metrics" not found Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.901836 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.901846 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-trusted-ca\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.903314 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-entrypoint\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.907813 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/db785d20-47a4-44dc-b950-b27c9229e7af-tmp\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.921273 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-token\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " 
pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.921776 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-sa-token\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.926721 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-kube-api-access-trfww\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:20 crc kubenswrapper[4750]: I0214 14:07:20.941475 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-h2dz8"] Feb 14 14:07:20 crc kubenswrapper[4750]: E0214 14:07:20.942230 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver metrics], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-h2dz8" podUID="db785d20-47a4-44dc-b950-b27c9229e7af" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.003904 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-h2dz8" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.012240 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-h2dz8" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100196 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-trusted-ca\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100259 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/db785d20-47a4-44dc-b950-b27c9229e7af-tmp\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100285 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100309 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config-openshift-service-cacrt\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100326 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-sa-token\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100347 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: 
\"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-entrypoint\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100365 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-kube-api-access-trfww\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100379 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/db785d20-47a4-44dc-b950-b27c9229e7af-datadir\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.100401 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-token\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.101392 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.101647 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.101672 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.101735 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db785d20-47a4-44dc-b950-b27c9229e7af-datadir" (OuterVolumeSpecName: "datadir") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.101739 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config" (OuterVolumeSpecName: "config") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.104057 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-kube-api-access-trfww" (OuterVolumeSpecName: "kube-api-access-trfww") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "kube-api-access-trfww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.104529 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-token" (OuterVolumeSpecName: "collector-token") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.104663 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-sa-token" (OuterVolumeSpecName: "sa-token") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.105498 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db785d20-47a4-44dc-b950-b27c9229e7af-tmp" (OuterVolumeSpecName: "tmp") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202649 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202695 4750 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/db785d20-47a4-44dc-b950-b27c9229e7af-tmp\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202713 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202732 4750 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202749 4750 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202769 4750 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/db785d20-47a4-44dc-b950-b27c9229e7af-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202790 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/db785d20-47a4-44dc-b950-b27c9229e7af-kube-api-access-trfww\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202807 4750 
reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/db785d20-47a4-44dc-b950-b27c9229e7af-datadir\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.202825 4750 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-token\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.405972 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.406042 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.411173 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.414890 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics\") pod \"collector-h2dz8\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " pod="openshift-logging/collector-h2dz8" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.507645 4750 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.507835 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver\") pod \"db785d20-47a4-44dc-b950-b27c9229e7af\" (UID: \"db785d20-47a4-44dc-b950-b27c9229e7af\") " Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.513158 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.517382 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics" (OuterVolumeSpecName: "metrics") pod "db785d20-47a4-44dc-b950-b27c9229e7af" (UID: "db785d20-47a4-44dc-b950-b27c9229e7af"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.610280 4750 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:21 crc kubenswrapper[4750]: I0214 14:07:21.610421 4750 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/db785d20-47a4-44dc-b950-b27c9229e7af-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.011668 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-h2dz8" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.077045 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-h2dz8"] Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.086976 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-h2dz8"] Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.096304 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-s8ptt"] Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.097839 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.101850 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.101893 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.102074 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.102314 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.102716 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-bqg6j" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.110979 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.112712 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-s8ptt"] Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.119992 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgh4j\" (UniqueName: \"kubernetes.io/projected/a6b3f125-069d-4e80-92bc-3e4c32659e7a-kube-api-access-wgh4j\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120162 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a6b3f125-069d-4e80-92bc-3e4c32659e7a-sa-token\") pod \"collector-s8ptt\" (UID: 
\"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120205 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-metrics\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120236 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-config-openshift-service-cacrt\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120268 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-collector-token\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120296 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a6b3f125-069d-4e80-92bc-3e4c32659e7a-datadir\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120331 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6b3f125-069d-4e80-92bc-3e4c32659e7a-tmp\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " 
pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120383 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-config\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120465 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-trusted-ca\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120531 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-collector-syslog-receiver\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.120634 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-entrypoint\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.222684 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-config\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 
14:07:22.222800 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-trusted-ca\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.222871 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-collector-syslog-receiver\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223000 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-entrypoint\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223038 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgh4j\" (UniqueName: \"kubernetes.io/projected/a6b3f125-069d-4e80-92bc-3e4c32659e7a-kube-api-access-wgh4j\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223094 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a6b3f125-069d-4e80-92bc-3e4c32659e7a-sa-token\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223163 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-metrics\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223189 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-config-openshift-service-cacrt\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223253 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-collector-token\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223281 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a6b3f125-069d-4e80-92bc-3e4c32659e7a-datadir\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.223349 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6b3f125-069d-4e80-92bc-3e4c32659e7a-tmp\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.224091 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-trusted-ca\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " 
pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.224351 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a6b3f125-069d-4e80-92bc-3e4c32659e7a-datadir\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.224484 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-config-openshift-service-cacrt\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.224927 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-entrypoint\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.225923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b3f125-069d-4e80-92bc-3e4c32659e7a-config\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.227978 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6b3f125-069d-4e80-92bc-3e4c32659e7a-tmp\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.228490 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-collector-token\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.229644 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-collector-syslog-receiver\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.231307 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a6b3f125-069d-4e80-92bc-3e4c32659e7a-metrics\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.247265 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgh4j\" (UniqueName: \"kubernetes.io/projected/a6b3f125-069d-4e80-92bc-3e4c32659e7a-kube-api-access-wgh4j\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.261908 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a6b3f125-069d-4e80-92bc-3e4c32659e7a-sa-token\") pod \"collector-s8ptt\" (UID: \"a6b3f125-069d-4e80-92bc-3e4c32659e7a\") " pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.423150 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-s8ptt" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.751825 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db785d20-47a4-44dc-b950-b27c9229e7af" path="/var/lib/kubelet/pods/db785d20-47a4-44dc-b950-b27c9229e7af/volumes" Feb 14 14:07:22 crc kubenswrapper[4750]: I0214 14:07:22.899844 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-s8ptt"] Feb 14 14:07:22 crc kubenswrapper[4750]: W0214 14:07:22.911739 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6b3f125_069d_4e80_92bc_3e4c32659e7a.slice/crio-4b08d045025d2ed27edb545d451b150c221204538096a795edb7509e5f93a5f1 WatchSource:0}: Error finding container 4b08d045025d2ed27edb545d451b150c221204538096a795edb7509e5f93a5f1: Status 404 returned error can't find the container with id 4b08d045025d2ed27edb545d451b150c221204538096a795edb7509e5f93a5f1 Feb 14 14:07:23 crc kubenswrapper[4750]: I0214 14:07:23.020845 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-s8ptt" event={"ID":"a6b3f125-069d-4e80-92bc-3e4c32659e7a","Type":"ContainerStarted","Data":"4b08d045025d2ed27edb545d451b150c221204538096a795edb7509e5f93a5f1"} Feb 14 14:07:30 crc kubenswrapper[4750]: I0214 14:07:30.084226 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-s8ptt" event={"ID":"a6b3f125-069d-4e80-92bc-3e4c32659e7a","Type":"ContainerStarted","Data":"6ac76ed641fd78c15796729b1637410550308a8240ac54ca61fa193a59e502d9"} Feb 14 14:07:30 crc kubenswrapper[4750]: I0214 14:07:30.128598 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-s8ptt" podStartSLOduration=2.125188831 podStartE2EDuration="8.128564962s" podCreationTimestamp="2026-02-14 14:07:22 +0000 UTC" firstStartedPulling="2026-02-14 14:07:22.915240336 +0000 UTC 
m=+914.941229817" lastFinishedPulling="2026-02-14 14:07:28.918616457 +0000 UTC m=+920.944605948" observedRunningTime="2026-02-14 14:07:30.11292803 +0000 UTC m=+922.138917561" watchObservedRunningTime="2026-02-14 14:07:30.128564962 +0000 UTC m=+922.154554483" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.459787 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lcbcr"] Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.462049 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.478092 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcbcr"] Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.587170 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-utilities\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.587449 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc5zk\" (UniqueName: \"kubernetes.io/projected/6f33270a-7c55-48a4-ae11-17293f92515c-kube-api-access-dc5zk\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.587643 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-catalog-content\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " 
pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.688726 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-catalog-content\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.688845 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-utilities\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.688932 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc5zk\" (UniqueName: \"kubernetes.io/projected/6f33270a-7c55-48a4-ae11-17293f92515c-kube-api-access-dc5zk\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.689432 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-utilities\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.689432 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-catalog-content\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " 
pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.706998 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc5zk\" (UniqueName: \"kubernetes.io/projected/6f33270a-7c55-48a4-ae11-17293f92515c-kube-api-access-dc5zk\") pod \"community-operators-lcbcr\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:57 crc kubenswrapper[4750]: I0214 14:07:57.802794 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:07:58 crc kubenswrapper[4750]: I0214 14:07:58.343856 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcbcr"] Feb 14 14:07:59 crc kubenswrapper[4750]: I0214 14:07:59.334927 4750 generic.go:334] "Generic (PLEG): container finished" podID="6f33270a-7c55-48a4-ae11-17293f92515c" containerID="8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb" exitCode=0 Feb 14 14:07:59 crc kubenswrapper[4750]: I0214 14:07:59.334999 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerDied","Data":"8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb"} Feb 14 14:07:59 crc kubenswrapper[4750]: I0214 14:07:59.335298 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerStarted","Data":"d3e7792683b5664b9458aa156576b1d312769db21e5a5dcd660d0e8633b6c0a1"} Feb 14 14:08:00 crc kubenswrapper[4750]: I0214 14:08:00.128639 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:08:00 crc kubenswrapper[4750]: I0214 14:08:00.128734 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:08:00 crc kubenswrapper[4750]: I0214 14:08:00.344238 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerStarted","Data":"17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25"} Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.352481 4750 generic.go:334] "Generic (PLEG): container finished" podID="6f33270a-7c55-48a4-ae11-17293f92515c" containerID="17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25" exitCode=0 Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.352527 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerDied","Data":"17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25"} Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.478205 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk"] Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.479549 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.485214 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.501588 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk"] Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.564238 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.564324 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.564387 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfc7z\" (UniqueName: \"kubernetes.io/projected/f97a5e13-6e27-4821-aadd-826dcebbfd6c-kube-api-access-hfc7z\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: 
I0214 14:08:01.666621 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfc7z\" (UniqueName: \"kubernetes.io/projected/f97a5e13-6e27-4821-aadd-826dcebbfd6c-kube-api-access-hfc7z\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.666776 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.666868 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.667543 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.667747 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.688170 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfc7z\" (UniqueName: \"kubernetes.io/projected/f97a5e13-6e27-4821-aadd-826dcebbfd6c-kube-api-access-hfc7z\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:01 crc kubenswrapper[4750]: I0214 14:08:01.833865 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:02 crc kubenswrapper[4750]: I0214 14:08:02.151077 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk"] Feb 14 14:08:02 crc kubenswrapper[4750]: I0214 14:08:02.358877 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" event={"ID":"f97a5e13-6e27-4821-aadd-826dcebbfd6c","Type":"ContainerStarted","Data":"055cf8d0f6a7f1ace3badd2154c42ed60a7bb890f57af2040a5f2064f9107751"} Feb 14 14:08:02 crc kubenswrapper[4750]: I0214 14:08:02.361345 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerStarted","Data":"6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601"} Feb 14 14:08:02 crc kubenswrapper[4750]: I0214 14:08:02.385437 4750 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-lcbcr" podStartSLOduration=2.946625318 podStartE2EDuration="5.385417969s" podCreationTimestamp="2026-02-14 14:07:57 +0000 UTC" firstStartedPulling="2026-02-14 14:07:59.336810939 +0000 UTC m=+951.362800420" lastFinishedPulling="2026-02-14 14:08:01.77560358 +0000 UTC m=+953.801593071" observedRunningTime="2026-02-14 14:08:02.37660667 +0000 UTC m=+954.402596151" watchObservedRunningTime="2026-02-14 14:08:02.385417969 +0000 UTC m=+954.411407450" Feb 14 14:08:03 crc kubenswrapper[4750]: I0214 14:08:03.370075 4750 generic.go:334] "Generic (PLEG): container finished" podID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerID="722f4cb9b5a4c1613fba7f692b554b02bc710f4a80dfbd15286493086b8f38d7" exitCode=0 Feb 14 14:08:03 crc kubenswrapper[4750]: I0214 14:08:03.370188 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" event={"ID":"f97a5e13-6e27-4821-aadd-826dcebbfd6c","Type":"ContainerDied","Data":"722f4cb9b5a4c1613fba7f692b554b02bc710f4a80dfbd15286493086b8f38d7"} Feb 14 14:08:05 crc kubenswrapper[4750]: I0214 14:08:05.388803 4750 generic.go:334] "Generic (PLEG): container finished" podID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerID="538cc0263ce371f8b5a4242d9b4d5ac453f375daae1a9585bb62bc32520635da" exitCode=0 Feb 14 14:08:05 crc kubenswrapper[4750]: I0214 14:08:05.388887 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" event={"ID":"f97a5e13-6e27-4821-aadd-826dcebbfd6c","Type":"ContainerDied","Data":"538cc0263ce371f8b5a4242d9b4d5ac453f375daae1a9585bb62bc32520635da"} Feb 14 14:08:06 crc kubenswrapper[4750]: I0214 14:08:06.397206 4750 generic.go:334] "Generic (PLEG): container finished" podID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerID="6e789882dc641b91d7f8598acd88fe6a9afc2bb5b9e1760f9343c2433116428f" 
exitCode=0 Feb 14 14:08:06 crc kubenswrapper[4750]: I0214 14:08:06.397416 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" event={"ID":"f97a5e13-6e27-4821-aadd-826dcebbfd6c","Type":"ContainerDied","Data":"6e789882dc641b91d7f8598acd88fe6a9afc2bb5b9e1760f9343c2433116428f"} Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.762668 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.803331 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.805236 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.867081 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.912843 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-bundle\") pod \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.912952 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-util\") pod \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.913081 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-hfc7z\" (UniqueName: \"kubernetes.io/projected/f97a5e13-6e27-4821-aadd-826dcebbfd6c-kube-api-access-hfc7z\") pod \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\" (UID: \"f97a5e13-6e27-4821-aadd-826dcebbfd6c\") " Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.913699 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-bundle" (OuterVolumeSpecName: "bundle") pod "f97a5e13-6e27-4821-aadd-826dcebbfd6c" (UID: "f97a5e13-6e27-4821-aadd-826dcebbfd6c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:08:07 crc kubenswrapper[4750]: I0214 14:08:07.917609 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97a5e13-6e27-4821-aadd-826dcebbfd6c-kube-api-access-hfc7z" (OuterVolumeSpecName: "kube-api-access-hfc7z") pod "f97a5e13-6e27-4821-aadd-826dcebbfd6c" (UID: "f97a5e13-6e27-4821-aadd-826dcebbfd6c"). InnerVolumeSpecName "kube-api-access-hfc7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.014768 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfc7z\" (UniqueName: \"kubernetes.io/projected/f97a5e13-6e27-4821-aadd-826dcebbfd6c-kube-api-access-hfc7z\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.014811 4750 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.211414 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-util" (OuterVolumeSpecName: "util") pod "f97a5e13-6e27-4821-aadd-826dcebbfd6c" (UID: "f97a5e13-6e27-4821-aadd-826dcebbfd6c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.218487 4750 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f97a5e13-6e27-4821-aadd-826dcebbfd6c-util\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.416311 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.416327 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk" event={"ID":"f97a5e13-6e27-4821-aadd-826dcebbfd6c","Type":"ContainerDied","Data":"055cf8d0f6a7f1ace3badd2154c42ed60a7bb890f57af2040a5f2064f9107751"} Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.416371 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055cf8d0f6a7f1ace3badd2154c42ed60a7bb890f57af2040a5f2064f9107751" Feb 14 14:08:08 crc kubenswrapper[4750]: I0214 14:08:08.478466 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:08:10 crc kubenswrapper[4750]: I0214 14:08:10.221962 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lcbcr"] Feb 14 14:08:11 crc kubenswrapper[4750]: I0214 14:08:11.440648 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lcbcr" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="registry-server" containerID="cri-o://6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601" gracePeriod=2 Feb 14 14:08:11 crc kubenswrapper[4750]: I0214 14:08:11.838784 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.028080 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-catalog-content\") pod \"6f33270a-7c55-48a4-ae11-17293f92515c\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.028188 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-utilities\") pod \"6f33270a-7c55-48a4-ae11-17293f92515c\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.028324 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc5zk\" (UniqueName: \"kubernetes.io/projected/6f33270a-7c55-48a4-ae11-17293f92515c-kube-api-access-dc5zk\") pod \"6f33270a-7c55-48a4-ae11-17293f92515c\" (UID: \"6f33270a-7c55-48a4-ae11-17293f92515c\") " Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.029076 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-utilities" (OuterVolumeSpecName: "utilities") pod "6f33270a-7c55-48a4-ae11-17293f92515c" (UID: "6f33270a-7c55-48a4-ae11-17293f92515c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.035350 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f33270a-7c55-48a4-ae11-17293f92515c-kube-api-access-dc5zk" (OuterVolumeSpecName: "kube-api-access-dc5zk") pod "6f33270a-7c55-48a4-ae11-17293f92515c" (UID: "6f33270a-7c55-48a4-ae11-17293f92515c"). InnerVolumeSpecName "kube-api-access-dc5zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.104722 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f33270a-7c55-48a4-ae11-17293f92515c" (UID: "6f33270a-7c55-48a4-ae11-17293f92515c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.130717 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.130760 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33270a-7c55-48a4-ae11-17293f92515c-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.130773 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc5zk\" (UniqueName: \"kubernetes.io/projected/6f33270a-7c55-48a4-ae11-17293f92515c-kube-api-access-dc5zk\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.450358 4750 generic.go:334] "Generic (PLEG): container finished" podID="6f33270a-7c55-48a4-ae11-17293f92515c" containerID="6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601" exitCode=0 Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.450400 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerDied","Data":"6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601"} Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.450430 4750 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-lcbcr" event={"ID":"6f33270a-7c55-48a4-ae11-17293f92515c","Type":"ContainerDied","Data":"d3e7792683b5664b9458aa156576b1d312769db21e5a5dcd660d0e8633b6c0a1"} Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.450438 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcbcr" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.450454 4750 scope.go:117] "RemoveContainer" containerID="6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.473522 4750 scope.go:117] "RemoveContainer" containerID="17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.484749 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lcbcr"] Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.490404 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lcbcr"] Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.502389 4750 scope.go:117] "RemoveContainer" containerID="8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.521920 4750 scope.go:117] "RemoveContainer" containerID="6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601" Feb 14 14:08:12 crc kubenswrapper[4750]: E0214 14:08:12.522341 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601\": container with ID starting with 6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601 not found: ID does not exist" containerID="6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 
14:08:12.522392 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601"} err="failed to get container status \"6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601\": rpc error: code = NotFound desc = could not find container \"6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601\": container with ID starting with 6664f1fdb65bc17183a4d41f2d2995a04d7fdeeceecaabe5c159e3775f304601 not found: ID does not exist" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.522422 4750 scope.go:117] "RemoveContainer" containerID="17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25" Feb 14 14:08:12 crc kubenswrapper[4750]: E0214 14:08:12.522646 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25\": container with ID starting with 17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25 not found: ID does not exist" containerID="17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.522668 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25"} err="failed to get container status \"17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25\": rpc error: code = NotFound desc = could not find container \"17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25\": container with ID starting with 17090c712e4be3734dba9ed8d77136203cc411ee3c4ff12f999a9b449273fc25 not found: ID does not exist" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.522686 4750 scope.go:117] "RemoveContainer" containerID="8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb" Feb 14 14:08:12 crc 
kubenswrapper[4750]: E0214 14:08:12.522861 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb\": container with ID starting with 8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb not found: ID does not exist" containerID="8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.522883 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb"} err="failed to get container status \"8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb\": rpc error: code = NotFound desc = could not find container \"8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb\": container with ID starting with 8aa0befbbcbdee01cecb8ece34ea79fa270c7d400247a9888513f639711524cb not found: ID does not exist" Feb 14 14:08:12 crc kubenswrapper[4750]: I0214 14:08:12.751473 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" path="/var/lib/kubelet/pods/6f33270a-7c55-48a4-ae11-17293f92515c/volumes" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.283558 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-zrfxn"] Feb 14 14:08:13 crc kubenswrapper[4750]: E0214 14:08:13.284222 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="registry-server" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284246 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="registry-server" Feb 14 14:08:13 crc kubenswrapper[4750]: E0214 14:08:13.284267 4750 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="extract" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284275 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="extract" Feb 14 14:08:13 crc kubenswrapper[4750]: E0214 14:08:13.284296 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="extract-content" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284309 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="extract-content" Feb 14 14:08:13 crc kubenswrapper[4750]: E0214 14:08:13.284325 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="extract-utilities" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284333 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" containerName="extract-utilities" Feb 14 14:08:13 crc kubenswrapper[4750]: E0214 14:08:13.284347 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="util" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284356 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="util" Feb 14 14:08:13 crc kubenswrapper[4750]: E0214 14:08:13.284375 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="pull" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284384 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="pull" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284530 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f33270a-7c55-48a4-ae11-17293f92515c" 
containerName="registry-server" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.284550 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97a5e13-6e27-4821-aadd-826dcebbfd6c" containerName="extract" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.285277 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.288074 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.288128 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xpqzm" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.291198 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.294070 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-zrfxn"] Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.452718 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrlgf\" (UniqueName: \"kubernetes.io/projected/32ffd70f-c819-435f-bb5f-a3a705e4052e-kube-api-access-vrlgf\") pod \"nmstate-operator-694c9596b7-zrfxn\" (UID: \"32ffd70f-c819-435f-bb5f-a3a705e4052e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.553688 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrlgf\" (UniqueName: \"kubernetes.io/projected/32ffd70f-c819-435f-bb5f-a3a705e4052e-kube-api-access-vrlgf\") pod \"nmstate-operator-694c9596b7-zrfxn\" (UID: \"32ffd70f-c819-435f-bb5f-a3a705e4052e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" 
Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.571428 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrlgf\" (UniqueName: \"kubernetes.io/projected/32ffd70f-c819-435f-bb5f-a3a705e4052e-kube-api-access-vrlgf\") pod \"nmstate-operator-694c9596b7-zrfxn\" (UID: \"32ffd70f-c819-435f-bb5f-a3a705e4052e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" Feb 14 14:08:13 crc kubenswrapper[4750]: I0214 14:08:13.601770 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" Feb 14 14:08:14 crc kubenswrapper[4750]: I0214 14:08:14.069875 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-zrfxn"] Feb 14 14:08:14 crc kubenswrapper[4750]: W0214 14:08:14.072768 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ffd70f_c819_435f_bb5f_a3a705e4052e.slice/crio-e41b7bb7cde91dd7a2f2e45ba0ad1eaf53dab8b2273000298511ae8b84808c03 WatchSource:0}: Error finding container e41b7bb7cde91dd7a2f2e45ba0ad1eaf53dab8b2273000298511ae8b84808c03: Status 404 returned error can't find the container with id e41b7bb7cde91dd7a2f2e45ba0ad1eaf53dab8b2273000298511ae8b84808c03 Feb 14 14:08:14 crc kubenswrapper[4750]: I0214 14:08:14.478500 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" event={"ID":"32ffd70f-c819-435f-bb5f-a3a705e4052e","Type":"ContainerStarted","Data":"e41b7bb7cde91dd7a2f2e45ba0ad1eaf53dab8b2273000298511ae8b84808c03"} Feb 14 14:08:17 crc kubenswrapper[4750]: I0214 14:08:17.502073 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" event={"ID":"32ffd70f-c819-435f-bb5f-a3a705e4052e","Type":"ContainerStarted","Data":"8e13cc84301923f4bee7caa377a1d156a4752edfa704f8a03df64b743f393652"} Feb 14 14:08:17 
crc kubenswrapper[4750]: I0214 14:08:17.533328 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-zrfxn" podStartSLOduration=2.303223301 podStartE2EDuration="4.533308433s" podCreationTimestamp="2026-02-14 14:08:13 +0000 UTC" firstStartedPulling="2026-02-14 14:08:14.077989186 +0000 UTC m=+966.103978677" lastFinishedPulling="2026-02-14 14:08:16.308074328 +0000 UTC m=+968.334063809" observedRunningTime="2026-02-14 14:08:17.529739413 +0000 UTC m=+969.555728934" watchObservedRunningTime="2026-02-14 14:08:17.533308433 +0000 UTC m=+969.559297924" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.394257 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.395768 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.398403 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jg6sl" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.409500 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.428959 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.430529 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.432968 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.466536 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7fb7g"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.468074 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.475569 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.499926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rk9x\" (UniqueName: \"kubernetes.io/projected/85849127-24fe-4e0b-9c43-c0d80d007c66-kube-api-access-2rk9x\") pod \"nmstate-metrics-58c85c668d-v5j7m\" (UID: \"85849127-24fe-4e0b-9c43-c0d80d007c66\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609085 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44ff\" (UniqueName: \"kubernetes.io/projected/a1ca23af-ddcc-4041-8e76-7220d4e32212-kube-api-access-q44ff\") pod \"nmstate-webhook-866bcb46dc-nsxh6\" (UID: \"a1ca23af-ddcc-4041-8e76-7220d4e32212\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609161 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-dbus-socket\") pod \"nmstate-handler-7fb7g\" (UID: 
\"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609235 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-ovs-socket\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609281 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rk9x\" (UniqueName: \"kubernetes.io/projected/85849127-24fe-4e0b-9c43-c0d80d007c66-kube-api-access-2rk9x\") pod \"nmstate-metrics-58c85c668d-v5j7m\" (UID: \"85849127-24fe-4e0b-9c43-c0d80d007c66\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609298 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1ca23af-ddcc-4041-8e76-7220d4e32212-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nsxh6\" (UID: \"a1ca23af-ddcc-4041-8e76-7220d4e32212\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609318 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-nmstate-lock\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.609338 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smft\" (UniqueName: 
\"kubernetes.io/projected/d67741a3-cda0-41c4-ad20-dac649d22a2d-kube-api-access-6smft\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.611472 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.612551 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.618828 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.618825 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.619259 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-w59fw" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.631998 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.653269 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rk9x\" (UniqueName: \"kubernetes.io/projected/85849127-24fe-4e0b-9c43-c0d80d007c66-kube-api-access-2rk9x\") pod \"nmstate-metrics-58c85c668d-v5j7m\" (UID: \"85849127-24fe-4e0b-9c43-c0d80d007c66\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711021 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8ql4\" (UniqueName: \"kubernetes.io/projected/8f83c9d2-f263-4114-b375-f18f32d91231-kube-api-access-t8ql4\") pod 
\"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711102 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f83c9d2-f263-4114-b375-f18f32d91231-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711161 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-ovs-socket\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711226 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1ca23af-ddcc-4041-8e76-7220d4e32212-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nsxh6\" (UID: \"a1ca23af-ddcc-4041-8e76-7220d4e32212\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711245 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f83c9d2-f263-4114-b375-f18f32d91231-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711265 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-nmstate-lock\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711288 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6smft\" (UniqueName: \"kubernetes.io/projected/d67741a3-cda0-41c4-ad20-dac649d22a2d-kube-api-access-6smft\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711321 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-ovs-socket\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711399 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-nmstate-lock\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711453 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44ff\" (UniqueName: \"kubernetes.io/projected/a1ca23af-ddcc-4041-8e76-7220d4e32212-kube-api-access-q44ff\") pod \"nmstate-webhook-866bcb46dc-nsxh6\" (UID: \"a1ca23af-ddcc-4041-8e76-7220d4e32212\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711525 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-dbus-socket\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.711879 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d67741a3-cda0-41c4-ad20-dac649d22a2d-dbus-socket\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.712559 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.714978 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1ca23af-ddcc-4041-8e76-7220d4e32212-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nsxh6\" (UID: \"a1ca23af-ddcc-4041-8e76-7220d4e32212\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.730247 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6smft\" (UniqueName: \"kubernetes.io/projected/d67741a3-cda0-41c4-ad20-dac649d22a2d-kube-api-access-6smft\") pod \"nmstate-handler-7fb7g\" (UID: \"d67741a3-cda0-41c4-ad20-dac649d22a2d\") " pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.749504 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44ff\" (UniqueName: \"kubernetes.io/projected/a1ca23af-ddcc-4041-8e76-7220d4e32212-kube-api-access-q44ff\") pod \"nmstate-webhook-866bcb46dc-nsxh6\" (UID: \"a1ca23af-ddcc-4041-8e76-7220d4e32212\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:22 crc kubenswrapper[4750]: 
I0214 14:08:22.789260 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.791997 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f459dfbdf-9lrz2"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.792960 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.816455 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f83c9d2-f263-4114-b375-f18f32d91231-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.816536 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8ql4\" (UniqueName: \"kubernetes.io/projected/8f83c9d2-f263-4114-b375-f18f32d91231-kube-api-access-t8ql4\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.816588 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f83c9d2-f263-4114-b375-f18f32d91231-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.817453 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/8f83c9d2-f263-4114-b375-f18f32d91231-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.819980 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f83c9d2-f263-4114-b375-f18f32d91231-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.825422 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f459dfbdf-9lrz2"] Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.896576 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8ql4\" (UniqueName: \"kubernetes.io/projected/8f83c9d2-f263-4114-b375-f18f32d91231-kube-api-access-t8ql4\") pod \"nmstate-console-plugin-5c78fc5d65-vwcr5\" (UID: \"8f83c9d2-f263-4114-b375-f18f32d91231\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.917903 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-service-ca\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.918164 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-oauth-serving-cert\") pod \"console-f459dfbdf-9lrz2\" (UID: 
\"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.918187 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-serving-cert\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.918270 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-trusted-ca-bundle\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.918288 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-config\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.918314 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxxv\" (UniqueName: \"kubernetes.io/projected/fae44155-c5f4-49f1-ab15-1365f1c77e0b-kube-api-access-scxxv\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.918346 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-oauth-config\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:22 crc kubenswrapper[4750]: I0214 14:08:22.935820 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.019518 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-service-ca\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.019560 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-oauth-serving-cert\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.019582 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-serving-cert\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.020481 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-oauth-serving-cert\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc 
kubenswrapper[4750]: I0214 14:08:23.020519 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-trusted-ca-bundle\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.020540 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-config\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.020730 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-service-ca\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.021242 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-trusted-ca-bundle\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.021814 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-config\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.021849 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-scxxv\" (UniqueName: \"kubernetes.io/projected/fae44155-c5f4-49f1-ab15-1365f1c77e0b-kube-api-access-scxxv\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.021897 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-oauth-config\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.025248 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-serving-cert\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.027487 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-oauth-config\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.043003 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scxxv\" (UniqueName: \"kubernetes.io/projected/fae44155-c5f4-49f1-ab15-1365f1c77e0b-kube-api-access-scxxv\") pod \"console-f459dfbdf-9lrz2\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.046989 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.210525 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.231078 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5"] Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.265367 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m"] Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.386707 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6"] Feb 14 14:08:23 crc kubenswrapper[4750]: W0214 14:08:23.458757 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ca23af_ddcc_4041_8e76_7220d4e32212.slice/crio-bd3cb2af8238b35c7ca453760e735499d74f62e53968b3a2c8ceb84262865b3c WatchSource:0}: Error finding container bd3cb2af8238b35c7ca453760e735499d74f62e53968b3a2c8ceb84262865b3c: Status 404 returned error can't find the container with id bd3cb2af8238b35c7ca453760e735499d74f62e53968b3a2c8ceb84262865b3c Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.549911 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" event={"ID":"8f83c9d2-f263-4114-b375-f18f32d91231","Type":"ContainerStarted","Data":"b1363fea6c3c3c7c6218e6e00137e3ec21b3e381c73032bd54626e3b18c63dbf"} Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.551374 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" 
event={"ID":"85849127-24fe-4e0b-9c43-c0d80d007c66","Type":"ContainerStarted","Data":"fcca2c8e0b4e19a48a900f44783355286a322c6cb7219991d265258607009d89"} Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.552562 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" event={"ID":"a1ca23af-ddcc-4041-8e76-7220d4e32212","Type":"ContainerStarted","Data":"bd3cb2af8238b35c7ca453760e735499d74f62e53968b3a2c8ceb84262865b3c"} Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.554079 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7fb7g" event={"ID":"d67741a3-cda0-41c4-ad20-dac649d22a2d","Type":"ContainerStarted","Data":"24aa3be9c663b75657bc6032d89d2111a5ef7ba8bc25db3908c8e4e8af78a408"} Feb 14 14:08:23 crc kubenswrapper[4750]: I0214 14:08:23.820726 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f459dfbdf-9lrz2"] Feb 14 14:08:24 crc kubenswrapper[4750]: I0214 14:08:24.568430 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f459dfbdf-9lrz2" event={"ID":"fae44155-c5f4-49f1-ab15-1365f1c77e0b","Type":"ContainerStarted","Data":"c88c212a9c0bf838f69e85f975167886c321d0454cadc47dea2ff7434fa0b108"} Feb 14 14:08:24 crc kubenswrapper[4750]: I0214 14:08:24.569477 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f459dfbdf-9lrz2" event={"ID":"fae44155-c5f4-49f1-ab15-1365f1c77e0b","Type":"ContainerStarted","Data":"2e42e50c09139523bfcbb6bccc8afa76df46ffcbec2c4e6c0d6e6fb247851a8f"} Feb 14 14:08:24 crc kubenswrapper[4750]: I0214 14:08:24.586548 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f459dfbdf-9lrz2" podStartSLOduration=2.586531382 podStartE2EDuration="2.586531382s" podCreationTimestamp="2026-02-14 14:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-14 14:08:24.584093992 +0000 UTC m=+976.610083483" watchObservedRunningTime="2026-02-14 14:08:24.586531382 +0000 UTC m=+976.612520863" Feb 14 14:08:26 crc kubenswrapper[4750]: I0214 14:08:26.727084 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.613465 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" event={"ID":"a1ca23af-ddcc-4041-8e76-7220d4e32212","Type":"ContainerStarted","Data":"b2a542c184704e8471999fa2481f28188130db349655cc72c509a96b34cabbf9"} Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.614860 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.616285 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7fb7g" event={"ID":"d67741a3-cda0-41c4-ad20-dac649d22a2d","Type":"ContainerStarted","Data":"7a94a9080785d9762c4a770d49f7d189bfec472d148ea4d515411c8a8a1ca7ca"} Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.616868 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.618649 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" event={"ID":"8f83c9d2-f263-4114-b375-f18f32d91231","Type":"ContainerStarted","Data":"9dcaa5ec2ac78e594b3117347de5c86b0b6dc4af83680d10ead28b5d20cb6fa5"} Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.620552 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" 
event={"ID":"85849127-24fe-4e0b-9c43-c0d80d007c66","Type":"ContainerStarted","Data":"e7158ae36c52a4e8fece4afb2bd5b8790bea4645e62541d6e35c87647d7e0a51"} Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.637835 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" podStartSLOduration=2.622253334 podStartE2EDuration="5.637804997s" podCreationTimestamp="2026-02-14 14:08:22 +0000 UTC" firstStartedPulling="2026-02-14 14:08:23.460705287 +0000 UTC m=+975.486694778" lastFinishedPulling="2026-02-14 14:08:26.47625696 +0000 UTC m=+978.502246441" observedRunningTime="2026-02-14 14:08:27.634061819 +0000 UTC m=+979.660051300" watchObservedRunningTime="2026-02-14 14:08:27.637804997 +0000 UTC m=+979.663794478" Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.650472 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vwcr5" podStartSLOduration=2.461163703 podStartE2EDuration="5.650449438s" podCreationTimestamp="2026-02-14 14:08:22 +0000 UTC" firstStartedPulling="2026-02-14 14:08:23.250263684 +0000 UTC m=+975.276253165" lastFinishedPulling="2026-02-14 14:08:26.439549399 +0000 UTC m=+978.465538900" observedRunningTime="2026-02-14 14:08:27.649090229 +0000 UTC m=+979.675079730" watchObservedRunningTime="2026-02-14 14:08:27.650449438 +0000 UTC m=+979.676438919" Feb 14 14:08:27 crc kubenswrapper[4750]: I0214 14:08:27.675327 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7fb7g" podStartSLOduration=2.116499428 podStartE2EDuration="5.67530323s" podCreationTimestamp="2026-02-14 14:08:22 +0000 UTC" firstStartedPulling="2026-02-14 14:08:22.907885554 +0000 UTC m=+974.933875035" lastFinishedPulling="2026-02-14 14:08:26.466689356 +0000 UTC m=+978.492678837" observedRunningTime="2026-02-14 14:08:27.667740733 +0000 UTC m=+979.693730254" 
watchObservedRunningTime="2026-02-14 14:08:27.67530323 +0000 UTC m=+979.701292731" Feb 14 14:08:29 crc kubenswrapper[4750]: I0214 14:08:29.640425 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" event={"ID":"85849127-24fe-4e0b-9c43-c0d80d007c66","Type":"ContainerStarted","Data":"cf6151d0e05355b90a2c71cfc8da6de41f9b445d4828d04f58c212605c5930b0"} Feb 14 14:08:29 crc kubenswrapper[4750]: I0214 14:08:29.673514 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-v5j7m" podStartSLOduration=1.7996020769999999 podStartE2EDuration="7.673491642s" podCreationTimestamp="2026-02-14 14:08:22 +0000 UTC" firstStartedPulling="2026-02-14 14:08:23.313050231 +0000 UTC m=+975.339039712" lastFinishedPulling="2026-02-14 14:08:29.186939766 +0000 UTC m=+981.212929277" observedRunningTime="2026-02-14 14:08:29.672920366 +0000 UTC m=+981.698909857" watchObservedRunningTime="2026-02-14 14:08:29.673491642 +0000 UTC m=+981.699481123" Feb 14 14:08:30 crc kubenswrapper[4750]: I0214 14:08:30.129526 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:08:30 crc kubenswrapper[4750]: I0214 14:08:30.129597 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:08:32 crc kubenswrapper[4750]: I0214 14:08:32.825608 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7fb7g" Feb 14 14:08:33 
crc kubenswrapper[4750]: I0214 14:08:33.211907 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:33 crc kubenswrapper[4750]: I0214 14:08:33.211971 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:33 crc kubenswrapper[4750]: I0214 14:08:33.218686 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:33 crc kubenswrapper[4750]: I0214 14:08:33.693988 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:08:33 crc kubenswrapper[4750]: I0214 14:08:33.775210 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-85c54b6d88-xgthb"] Feb 14 14:08:43 crc kubenswrapper[4750]: I0214 14:08:43.057845 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsxh6" Feb 14 14:08:58 crc kubenswrapper[4750]: I0214 14:08:58.835011 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-85c54b6d88-xgthb" podUID="21250970-b4be-492f-bd89-22784c31b8fa" containerName="console" containerID="cri-o://0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b" gracePeriod=15 Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.308662 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-85c54b6d88-xgthb_21250970-b4be-492f-bd89-22784c31b8fa/console/0.log" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.309028 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421649 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-oauth-serving-cert\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421750 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-console-config\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421810 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-serving-cert\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421833 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-oauth-config\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421862 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chhv\" (UniqueName: \"kubernetes.io/projected/21250970-b4be-492f-bd89-22784c31b8fa-kube-api-access-4chhv\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421920 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-trusted-ca-bundle\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.421960 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-service-ca\") pod \"21250970-b4be-492f-bd89-22784c31b8fa\" (UID: \"21250970-b4be-492f-bd89-22784c31b8fa\") " Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.422787 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-service-ca" (OuterVolumeSpecName: "service-ca") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.422781 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-console-config" (OuterVolumeSpecName: "console-config") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.423666 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.423694 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.427008 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.427524 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21250970-b4be-492f-bd89-22784c31b8fa-kube-api-access-4chhv" (OuterVolumeSpecName: "kube-api-access-4chhv") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "kube-api-access-4chhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.431620 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "21250970-b4be-492f-bd89-22784c31b8fa" (UID: "21250970-b4be-492f-bd89-22784c31b8fa"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525040 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525104 4750 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525169 4750 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525189 4750 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525208 4750 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21250970-b4be-492f-bd89-22784c31b8fa-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525229 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chhv\" (UniqueName: \"kubernetes.io/projected/21250970-b4be-492f-bd89-22784c31b8fa-kube-api-access-4chhv\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.525250 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21250970-b4be-492f-bd89-22784c31b8fa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:08:59 crc 
kubenswrapper[4750]: I0214 14:08:59.897959 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-85c54b6d88-xgthb_21250970-b4be-492f-bd89-22784c31b8fa/console/0.log" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.898010 4750 generic.go:334] "Generic (PLEG): container finished" podID="21250970-b4be-492f-bd89-22784c31b8fa" containerID="0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b" exitCode=2 Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.898039 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85c54b6d88-xgthb" event={"ID":"21250970-b4be-492f-bd89-22784c31b8fa","Type":"ContainerDied","Data":"0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b"} Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.898067 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85c54b6d88-xgthb" event={"ID":"21250970-b4be-492f-bd89-22784c31b8fa","Type":"ContainerDied","Data":"647020d28e8c577b2f42934ee46e1fabea573ccf8875c7d4c890807fedb9618a"} Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.898082 4750 scope.go:117] "RemoveContainer" containerID="0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.898128 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85c54b6d88-xgthb" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.926955 4750 scope.go:117] "RemoveContainer" containerID="0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b" Feb 14 14:08:59 crc kubenswrapper[4750]: E0214 14:08:59.927427 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b\": container with ID starting with 0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b not found: ID does not exist" containerID="0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.927480 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b"} err="failed to get container status \"0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b\": rpc error: code = NotFound desc = could not find container \"0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b\": container with ID starting with 0cb5e45610b74aac64d22bb5ff3ef947f70a2766e9d7db5f2e91d442b900599b not found: ID does not exist" Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.933829 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-85c54b6d88-xgthb"] Feb 14 14:08:59 crc kubenswrapper[4750]: I0214 14:08:59.942279 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-85c54b6d88-xgthb"] Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.129946 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.130342 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.130484 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.131698 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b148be393ce42f2c5c0534a347aaef624d5f007a262f94fbbdd37721a992213"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.131779 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://4b148be393ce42f2c5c0534a347aaef624d5f007a262f94fbbdd37721a992213" gracePeriod=600 Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.753898 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21250970-b4be-492f-bd89-22784c31b8fa" path="/var/lib/kubelet/pods/21250970-b4be-492f-bd89-22784c31b8fa/volumes" Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.908735 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="4b148be393ce42f2c5c0534a347aaef624d5f007a262f94fbbdd37721a992213" exitCode=0 Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 
14:09:00.908808 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"4b148be393ce42f2c5c0534a347aaef624d5f007a262f94fbbdd37721a992213"} Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.908847 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"02bb963338838db28a9a316102bca11f6cc643e0b302795fd6980c83c74ec211"} Feb 14 14:09:00 crc kubenswrapper[4750]: I0214 14:09:00.908867 4750 scope.go:117] "RemoveContainer" containerID="0fcac30bb0adf08b1da63eb7c690c0a5ff45982d267b4e4d61b22d035f601ad8" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.720650 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97"] Feb 14 14:09:02 crc kubenswrapper[4750]: E0214 14:09:02.721517 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21250970-b4be-492f-bd89-22784c31b8fa" containerName="console" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.721542 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="21250970-b4be-492f-bd89-22784c31b8fa" containerName="console" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.721832 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="21250970-b4be-492f-bd89-22784c31b8fa" containerName="console" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.725349 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.729081 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.733972 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97"] Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.811184 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.812031 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgd9v\" (UniqueName: \"kubernetes.io/projected/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-kube-api-access-wgd9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.812069 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: 
I0214 14:09:02.913423 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.913825 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgd9v\" (UniqueName: \"kubernetes.io/projected/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-kube-api-access-wgd9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.913952 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.914395 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.914589 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:02 crc kubenswrapper[4750]: I0214 14:09:02.944570 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgd9v\" (UniqueName: \"kubernetes.io/projected/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-kube-api-access-wgd9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:03 crc kubenswrapper[4750]: I0214 14:09:03.063771 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:03 crc kubenswrapper[4750]: I0214 14:09:03.646016 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97"] Feb 14 14:09:03 crc kubenswrapper[4750]: I0214 14:09:03.936231 4750 generic.go:334] "Generic (PLEG): container finished" podID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerID="9e5268fb56fc367ae23505cc77fc0951b21dae466e71fc29f0cb05f65d9dcd4d" exitCode=0 Feb 14 14:09:03 crc kubenswrapper[4750]: I0214 14:09:03.936304 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" event={"ID":"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c","Type":"ContainerDied","Data":"9e5268fb56fc367ae23505cc77fc0951b21dae466e71fc29f0cb05f65d9dcd4d"} Feb 14 14:09:03 crc kubenswrapper[4750]: I0214 14:09:03.936356 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" event={"ID":"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c","Type":"ContainerStarted","Data":"1c1959c1bd6e60d27a7bf91bcae25a8ceeb42461c87e8455d3b7a98d0cfcccdf"} Feb 14 14:09:05 crc kubenswrapper[4750]: I0214 14:09:05.955748 4750 generic.go:334] "Generic (PLEG): container finished" podID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerID="bc1ae5846d84cd6a65cf5806524a9a23b22f45849dd68c7582e6ba38c59a4edf" exitCode=0 Feb 14 14:09:05 crc kubenswrapper[4750]: I0214 14:09:05.955880 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" event={"ID":"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c","Type":"ContainerDied","Data":"bc1ae5846d84cd6a65cf5806524a9a23b22f45849dd68c7582e6ba38c59a4edf"} Feb 14 14:09:06 crc kubenswrapper[4750]: I0214 14:09:06.976144 4750 generic.go:334] "Generic (PLEG): container finished" podID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerID="d3a538e2870f4e91420f1c6237ae94a8d00c406fe48d0ce0a2a28912137db6c1" exitCode=0 Feb 14 14:09:06 crc kubenswrapper[4750]: I0214 14:09:06.976253 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" event={"ID":"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c","Type":"ContainerDied","Data":"d3a538e2870f4e91420f1c6237ae94a8d00c406fe48d0ce0a2a28912137db6c1"} Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.321287 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.423282 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-util\") pod \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.423501 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgd9v\" (UniqueName: \"kubernetes.io/projected/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-kube-api-access-wgd9v\") pod \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.423704 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-bundle\") pod \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\" (UID: \"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c\") " Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.425447 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-bundle" (OuterVolumeSpecName: "bundle") pod "2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" (UID: "2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.431051 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-kube-api-access-wgd9v" (OuterVolumeSpecName: "kube-api-access-wgd9v") pod "2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" (UID: "2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c"). InnerVolumeSpecName "kube-api-access-wgd9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.436551 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-util" (OuterVolumeSpecName: "util") pod "2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" (UID: "2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.525734 4750 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.525766 4750 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-util\") on node \"crc\" DevicePath \"\"" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.525775 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgd9v\" (UniqueName: \"kubernetes.io/projected/2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c-kube-api-access-wgd9v\") on node \"crc\" DevicePath \"\"" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.993475 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" event={"ID":"2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c","Type":"ContainerDied","Data":"1c1959c1bd6e60d27a7bf91bcae25a8ceeb42461c87e8455d3b7a98d0cfcccdf"} Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.993726 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1959c1bd6e60d27a7bf91bcae25a8ceeb42461c87e8455d3b7a98d0cfcccdf" Feb 14 14:09:08 crc kubenswrapper[4750]: I0214 14:09:08.993572 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.910040 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb"] Feb 14 14:09:16 crc kubenswrapper[4750]: E0214 14:09:16.910885 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="util" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.910901 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="util" Feb 14 14:09:16 crc kubenswrapper[4750]: E0214 14:09:16.910922 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="pull" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.910930 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="pull" Feb 14 14:09:16 crc kubenswrapper[4750]: E0214 14:09:16.910951 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="extract" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.910961 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="extract" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.911136 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c" containerName="extract" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.911811 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.917313 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.917530 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.917720 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.918252 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.921515 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-h96qd" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.923142 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb"] Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.977997 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsxgb\" (UniqueName: \"kubernetes.io/projected/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-kube-api-access-gsxgb\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.978405 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-apiservice-cert\") pod 
\"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:16 crc kubenswrapper[4750]: I0214 14:09:16.978546 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-webhook-cert\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.080079 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxgb\" (UniqueName: \"kubernetes.io/projected/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-kube-api-access-gsxgb\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.080149 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-apiservice-cert\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.080270 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-webhook-cert\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc 
kubenswrapper[4750]: I0214 14:09:17.086049 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-apiservice-cert\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.086740 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-webhook-cert\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.119794 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxgb\" (UniqueName: \"kubernetes.io/projected/863644f3-8ec4-4391-a74e-7fe2d8dc4b3c-kube-api-access-gsxgb\") pod \"metallb-operator-controller-manager-86bdb8fc5c-ps8mb\" (UID: \"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c\") " pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.228288 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd"] Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.229187 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.229471 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.242451 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bwfdz" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.242535 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.242738 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.267378 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd"] Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.298088 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bs4\" (UniqueName: \"kubernetes.io/projected/bbf10811-6f76-4024-83ce-7263f00af6bb-kube-api-access-s8bs4\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.298218 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbf10811-6f76-4024-83ce-7263f00af6bb-apiservice-cert\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.298264 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/bbf10811-6f76-4024-83ce-7263f00af6bb-webhook-cert\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.399206 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8bs4\" (UniqueName: \"kubernetes.io/projected/bbf10811-6f76-4024-83ce-7263f00af6bb-kube-api-access-s8bs4\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.399478 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbf10811-6f76-4024-83ce-7263f00af6bb-apiservice-cert\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.399526 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbf10811-6f76-4024-83ce-7263f00af6bb-webhook-cert\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.403238 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbf10811-6f76-4024-83ce-7263f00af6bb-webhook-cert\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc 
kubenswrapper[4750]: I0214 14:09:17.403733 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbf10811-6f76-4024-83ce-7263f00af6bb-apiservice-cert\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.414875 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8bs4\" (UniqueName: \"kubernetes.io/projected/bbf10811-6f76-4024-83ce-7263f00af6bb-kube-api-access-s8bs4\") pod \"metallb-operator-webhook-server-5967b4f7c5-67sfd\" (UID: \"bbf10811-6f76-4024-83ce-7263f00af6bb\") " pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.596128 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:17 crc kubenswrapper[4750]: I0214 14:09:17.708165 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb"] Feb 14 14:09:17 crc kubenswrapper[4750]: W0214 14:09:17.714140 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863644f3_8ec4_4391_a74e_7fe2d8dc4b3c.slice/crio-5f340611a548aac61a6afb2be58e790f1cb18428cc38b79c7abe7a9339e84ad0 WatchSource:0}: Error finding container 5f340611a548aac61a6afb2be58e790f1cb18428cc38b79c7abe7a9339e84ad0: Status 404 returned error can't find the container with id 5f340611a548aac61a6afb2be58e790f1cb18428cc38b79c7abe7a9339e84ad0 Feb 14 14:09:18 crc kubenswrapper[4750]: I0214 14:09:18.059887 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" 
event={"ID":"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c","Type":"ContainerStarted","Data":"5f340611a548aac61a6afb2be58e790f1cb18428cc38b79c7abe7a9339e84ad0"} Feb 14 14:09:18 crc kubenswrapper[4750]: I0214 14:09:18.103613 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd"] Feb 14 14:09:18 crc kubenswrapper[4750]: W0214 14:09:18.111803 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbf10811_6f76_4024_83ce_7263f00af6bb.slice/crio-0898ed9818d21062d8515599631042db7f638d87c877ba4b86031383af9bed7c WatchSource:0}: Error finding container 0898ed9818d21062d8515599631042db7f638d87c877ba4b86031383af9bed7c: Status 404 returned error can't find the container with id 0898ed9818d21062d8515599631042db7f638d87c877ba4b86031383af9bed7c Feb 14 14:09:19 crc kubenswrapper[4750]: I0214 14:09:19.079014 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" event={"ID":"bbf10811-6f76-4024-83ce-7263f00af6bb","Type":"ContainerStarted","Data":"0898ed9818d21062d8515599631042db7f638d87c877ba4b86031383af9bed7c"} Feb 14 14:09:24 crc kubenswrapper[4750]: I0214 14:09:24.114809 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" event={"ID":"bbf10811-6f76-4024-83ce-7263f00af6bb","Type":"ContainerStarted","Data":"725cae18e270d374c51b6845dc264aabab6a25e4565f4f30ada4fb21c7df50fc"} Feb 14 14:09:24 crc kubenswrapper[4750]: I0214 14:09:24.115349 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:24 crc kubenswrapper[4750]: I0214 14:09:24.115689 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" 
event={"ID":"863644f3-8ec4-4391-a74e-7fe2d8dc4b3c","Type":"ContainerStarted","Data":"83398fe607224c6264ff6eb1b33f8f9d5e8801bca6fc60769dd03acf1e4752cd"} Feb 14 14:09:24 crc kubenswrapper[4750]: I0214 14:09:24.115822 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:24 crc kubenswrapper[4750]: I0214 14:09:24.138971 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" podStartSLOduration=1.549490866 podStartE2EDuration="7.13895348s" podCreationTimestamp="2026-02-14 14:09:17 +0000 UTC" firstStartedPulling="2026-02-14 14:09:18.115594187 +0000 UTC m=+1030.141583658" lastFinishedPulling="2026-02-14 14:09:23.705056791 +0000 UTC m=+1035.731046272" observedRunningTime="2026-02-14 14:09:24.13334088 +0000 UTC m=+1036.159330361" watchObservedRunningTime="2026-02-14 14:09:24.13895348 +0000 UTC m=+1036.164942961" Feb 14 14:09:24 crc kubenswrapper[4750]: I0214 14:09:24.156206 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" podStartSLOduration=2.187910096 podStartE2EDuration="8.156173403s" podCreationTimestamp="2026-02-14 14:09:16 +0000 UTC" firstStartedPulling="2026-02-14 14:09:17.719638763 +0000 UTC m=+1029.745628244" lastFinishedPulling="2026-02-14 14:09:23.68790206 +0000 UTC m=+1035.713891551" observedRunningTime="2026-02-14 14:09:24.151981673 +0000 UTC m=+1036.177971154" watchObservedRunningTime="2026-02-14 14:09:24.156173403 +0000 UTC m=+1036.182162884" Feb 14 14:09:37 crc kubenswrapper[4750]: I0214 14:09:37.611047 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5967b4f7c5-67sfd" Feb 14 14:09:57 crc kubenswrapper[4750]: I0214 14:09:57.233016 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/metallb-operator-controller-manager-86bdb8fc5c-ps8mb" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.031011 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9mm2g"] Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.034980 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.038721 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.038842 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-x5tcd" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.038967 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.042808 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj"] Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.043755 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.045983 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.053229 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj"] Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058107 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fe99c26-de80-4b40-805c-95d804f86cf7-metrics-certs\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058206 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-metrics\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058231 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-conf\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058260 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-reloader\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058285 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dcm\" (UniqueName: \"kubernetes.io/projected/9fe99c26-de80-4b40-805c-95d804f86cf7-kube-api-access-66dcm\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058319 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-sockets\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.058358 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-startup\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.123517 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wbx4h"] Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.125209 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.129506 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-v9m54"] Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.130217 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-znn6w" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.130417 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.130532 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.130639 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.130695 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.132768 4750 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.140450 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-v9m54"] Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.160825 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66dcm\" (UniqueName: \"kubernetes.io/projected/9fe99c26-de80-4b40-805c-95d804f86cf7-kube-api-access-66dcm\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.160901 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2892e47-7716-4ebc-86ef-376d952f3546-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8ljxj\" (UID: \"b2892e47-7716-4ebc-86ef-376d952f3546\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161680 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-sockets\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161731 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-startup\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161752 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fe99c26-de80-4b40-805c-95d804f86cf7-metrics-certs\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161774 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nm7z\" (UniqueName: \"kubernetes.io/projected/b2892e47-7716-4ebc-86ef-376d952f3546-kube-api-access-9nm7z\") pod \"frr-k8s-webhook-server-78b44bf5bb-8ljxj\" (UID: \"b2892e47-7716-4ebc-86ef-376d952f3546\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161827 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-metrics\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161846 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-conf\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.161871 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-reloader\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.162099 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-sockets\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.162153 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-reloader\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.162352 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-metrics\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.162535 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-conf\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.162633 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9fe99c26-de80-4b40-805c-95d804f86cf7-frr-startup\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.169417 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fe99c26-de80-4b40-805c-95d804f86cf7-metrics-certs\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.191763 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66dcm\" (UniqueName: \"kubernetes.io/projected/9fe99c26-de80-4b40-805c-95d804f86cf7-kube-api-access-66dcm\") pod \"frr-k8s-9mm2g\" (UID: \"9fe99c26-de80-4b40-805c-95d804f86cf7\") " pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263692 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nm7z\" (UniqueName: \"kubernetes.io/projected/b2892e47-7716-4ebc-86ef-376d952f3546-kube-api-access-9nm7z\") pod \"frr-k8s-webhook-server-78b44bf5bb-8ljxj\" (UID: \"b2892e47-7716-4ebc-86ef-376d952f3546\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263765 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263817 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ns4d\" (UniqueName: \"kubernetes.io/projected/da5b2754-b4d8-46a1-ad93-926e2ae005eb-kube-api-access-9ns4d\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263836 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dgrm\" (UniqueName: \"kubernetes.io/projected/9f176239-5523-47f3-909c-e7c77b65acf5-kube-api-access-9dgrm\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263851 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-metrics-certs\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263882 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f176239-5523-47f3-909c-e7c77b65acf5-metallb-excludel2\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263919 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da5b2754-b4d8-46a1-ad93-926e2ae005eb-metrics-certs\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263940 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2892e47-7716-4ebc-86ef-376d952f3546-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8ljxj\" (UID: \"b2892e47-7716-4ebc-86ef-376d952f3546\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.263976 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da5b2754-b4d8-46a1-ad93-926e2ae005eb-cert\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.269718 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2892e47-7716-4ebc-86ef-376d952f3546-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8ljxj\" (UID: \"b2892e47-7716-4ebc-86ef-376d952f3546\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.283821 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nm7z\" (UniqueName: \"kubernetes.io/projected/b2892e47-7716-4ebc-86ef-376d952f3546-kube-api-access-9nm7z\") pod \"frr-k8s-webhook-server-78b44bf5bb-8ljxj\" (UID: \"b2892e47-7716-4ebc-86ef-376d952f3546\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.353501 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365228 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365316 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ns4d\" (UniqueName: \"kubernetes.io/projected/da5b2754-b4d8-46a1-ad93-926e2ae005eb-kube-api-access-9ns4d\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365342 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dgrm\" (UniqueName: \"kubernetes.io/projected/9f176239-5523-47f3-909c-e7c77b65acf5-kube-api-access-9dgrm\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " 
pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365361 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-metrics-certs\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365380 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f176239-5523-47f3-909c-e7c77b65acf5-metallb-excludel2\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365426 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da5b2754-b4d8-46a1-ad93-926e2ae005eb-metrics-certs\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.365484 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da5b2754-b4d8-46a1-ad93-926e2ae005eb-cert\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: E0214 14:09:58.366165 4750 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 14:09:58 crc kubenswrapper[4750]: E0214 14:09:58.366247 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist podName:9f176239-5523-47f3-909c-e7c77b65acf5 nodeName:}" failed. 
No retries permitted until 2026-02-14 14:09:58.866225193 +0000 UTC m=+1070.892214674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist") pod "speaker-wbx4h" (UID: "9f176239-5523-47f3-909c-e7c77b65acf5") : secret "metallb-memberlist" not found Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.366886 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f176239-5523-47f3-909c-e7c77b65acf5-metallb-excludel2\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.369019 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-metrics-certs\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.369897 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da5b2754-b4d8-46a1-ad93-926e2ae005eb-metrics-certs\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.373193 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.374491 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da5b2754-b4d8-46a1-ad93-926e2ae005eb-cert\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.383020 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ns4d\" (UniqueName: \"kubernetes.io/projected/da5b2754-b4d8-46a1-ad93-926e2ae005eb-kube-api-access-9ns4d\") pod \"controller-69bbfbf88f-v9m54\" (UID: \"da5b2754-b4d8-46a1-ad93-926e2ae005eb\") " pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.383163 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dgrm\" (UniqueName: \"kubernetes.io/projected/9f176239-5523-47f3-909c-e7c77b65acf5-kube-api-access-9dgrm\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.455505 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.861074 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj"] Feb 14 14:09:58 crc kubenswrapper[4750]: W0214 14:09:58.862050 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2892e47_7716_4ebc_86ef_376d952f3546.slice/crio-8e152b201cec94dbd45fd7eb84ccfde38d9a519b3dc47dd16cf70229f85e2ddc WatchSource:0}: Error finding container 8e152b201cec94dbd45fd7eb84ccfde38d9a519b3dc47dd16cf70229f85e2ddc: Status 404 returned error can't find the container with id 8e152b201cec94dbd45fd7eb84ccfde38d9a519b3dc47dd16cf70229f85e2ddc Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.905907 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:58 crc kubenswrapper[4750]: E0214 14:09:58.906106 4750 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 14:09:58 crc kubenswrapper[4750]: E0214 14:09:58.906244 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist podName:9f176239-5523-47f3-909c-e7c77b65acf5 nodeName:}" failed. No retries permitted until 2026-02-14 14:09:59.906219034 +0000 UTC m=+1071.932208515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist") pod "speaker-wbx4h" (UID: "9f176239-5523-47f3-909c-e7c77b65acf5") : secret "metallb-memberlist" not found Feb 14 14:09:58 crc kubenswrapper[4750]: I0214 14:09:58.941735 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-v9m54"] Feb 14 14:09:58 crc kubenswrapper[4750]: W0214 14:09:58.952322 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5b2754_b4d8_46a1_ad93_926e2ae005eb.slice/crio-fd7cc65d5faee011e7e5decf74b77787220f8258dfd49c4193994382e5f24d15 WatchSource:0}: Error finding container fd7cc65d5faee011e7e5decf74b77787220f8258dfd49c4193994382e5f24d15: Status 404 returned error can't find the container with id fd7cc65d5faee011e7e5decf74b77787220f8258dfd49c4193994382e5f24d15 Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.412902 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-v9m54" event={"ID":"da5b2754-b4d8-46a1-ad93-926e2ae005eb","Type":"ContainerStarted","Data":"f1be02042b855901a5a7063627df3bd311ab491f15d14320c04e093041a5f035"} Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.412959 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-v9m54" event={"ID":"da5b2754-b4d8-46a1-ad93-926e2ae005eb","Type":"ContainerStarted","Data":"d4d1086db3db64dbb481941fd43197ff689507f6ff1fcf3468982ec906aa6c49"} Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.412975 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-v9m54" event={"ID":"da5b2754-b4d8-46a1-ad93-926e2ae005eb","Type":"ContainerStarted","Data":"fd7cc65d5faee011e7e5decf74b77787220f8258dfd49c4193994382e5f24d15"} Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.413073 4750 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.414137 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" event={"ID":"b2892e47-7716-4ebc-86ef-376d952f3546","Type":"ContainerStarted","Data":"8e152b201cec94dbd45fd7eb84ccfde38d9a519b3dc47dd16cf70229f85e2ddc"} Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.415391 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"a5c924aa47307ef54f46154a1436182e6846b87a3663abd3bc8882932790d6c4"} Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.436310 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-v9m54" podStartSLOduration=1.436281101 podStartE2EDuration="1.436281101s" podCreationTimestamp="2026-02-14 14:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:09:59.432153863 +0000 UTC m=+1071.458143364" watchObservedRunningTime="2026-02-14 14:09:59.436281101 +0000 UTC m=+1071.462270582" Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.920761 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " pod="metallb-system/speaker-wbx4h" Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.926876 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f176239-5523-47f3-909c-e7c77b65acf5-memberlist\") pod \"speaker-wbx4h\" (UID: \"9f176239-5523-47f3-909c-e7c77b65acf5\") " 
pod="metallb-system/speaker-wbx4h" Feb 14 14:09:59 crc kubenswrapper[4750]: I0214 14:09:59.945527 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wbx4h" Feb 14 14:10:00 crc kubenswrapper[4750]: I0214 14:10:00.435902 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wbx4h" event={"ID":"9f176239-5523-47f3-909c-e7c77b65acf5","Type":"ContainerStarted","Data":"72dad81f3a32c9dde5e23962732032dc884a986ef180c075934b6bd516ae9e7a"} Feb 14 14:10:00 crc kubenswrapper[4750]: I0214 14:10:00.436149 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wbx4h" event={"ID":"9f176239-5523-47f3-909c-e7c77b65acf5","Type":"ContainerStarted","Data":"8e39da7c73017d65eac18678ab5ce46718abf866a38c54d6d7fc0e2e2e585f83"} Feb 14 14:10:01 crc kubenswrapper[4750]: I0214 14:10:01.446294 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wbx4h" event={"ID":"9f176239-5523-47f3-909c-e7c77b65acf5","Type":"ContainerStarted","Data":"d07505bfc5e948703a72c9728b381e5981079bfc33974239833b8712d6413cba"} Feb 14 14:10:01 crc kubenswrapper[4750]: I0214 14:10:01.446563 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wbx4h" Feb 14 14:10:01 crc kubenswrapper[4750]: I0214 14:10:01.475981 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wbx4h" podStartSLOduration=3.475963161 podStartE2EDuration="3.475963161s" podCreationTimestamp="2026-02-14 14:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:10:01.472194723 +0000 UTC m=+1073.498184194" watchObservedRunningTime="2026-02-14 14:10:01.475963161 +0000 UTC m=+1073.501952642" Feb 14 14:10:06 crc kubenswrapper[4750]: I0214 14:10:06.513942 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="9fe99c26-de80-4b40-805c-95d804f86cf7" containerID="705af0127c5ebcc3e819835bb366b38a4eecb223cff9e8e527ff6b988ddfc44b" exitCode=0 Feb 14 14:10:06 crc kubenswrapper[4750]: I0214 14:10:06.514128 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerDied","Data":"705af0127c5ebcc3e819835bb366b38a4eecb223cff9e8e527ff6b988ddfc44b"} Feb 14 14:10:06 crc kubenswrapper[4750]: I0214 14:10:06.517937 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" event={"ID":"b2892e47-7716-4ebc-86ef-376d952f3546","Type":"ContainerStarted","Data":"df5593688df3ae55873c33399d73fc5d1a1e610ca3dfc755d2e113ee28a726b3"} Feb 14 14:10:06 crc kubenswrapper[4750]: I0214 14:10:06.518252 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:10:06 crc kubenswrapper[4750]: I0214 14:10:06.571214 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" podStartSLOduration=1.545989832 podStartE2EDuration="8.571194495s" podCreationTimestamp="2026-02-14 14:09:58 +0000 UTC" firstStartedPulling="2026-02-14 14:09:58.865422116 +0000 UTC m=+1070.891411597" lastFinishedPulling="2026-02-14 14:10:05.890626779 +0000 UTC m=+1077.916616260" observedRunningTime="2026-02-14 14:10:06.570773413 +0000 UTC m=+1078.596762894" watchObservedRunningTime="2026-02-14 14:10:06.571194495 +0000 UTC m=+1078.597183976" Feb 14 14:10:07 crc kubenswrapper[4750]: I0214 14:10:07.528797 4750 generic.go:334] "Generic (PLEG): container finished" podID="9fe99c26-de80-4b40-805c-95d804f86cf7" containerID="4177ad62eb489babb6505c7660fcc56923561d1db46b81d8eb95192836e783b1" exitCode=0 Feb 14 14:10:07 crc kubenswrapper[4750]: I0214 14:10:07.528948 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerDied","Data":"4177ad62eb489babb6505c7660fcc56923561d1db46b81d8eb95192836e783b1"} Feb 14 14:10:08 crc kubenswrapper[4750]: I0214 14:10:08.540446 4750 generic.go:334] "Generic (PLEG): container finished" podID="9fe99c26-de80-4b40-805c-95d804f86cf7" containerID="7e30028abdf42646388bb007533203d4a9109d87852b4c7b611a5182db849b76" exitCode=0 Feb 14 14:10:08 crc kubenswrapper[4750]: I0214 14:10:08.540548 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerDied","Data":"7e30028abdf42646388bb007533203d4a9109d87852b4c7b611a5182db849b76"} Feb 14 14:10:09 crc kubenswrapper[4750]: I0214 14:10:09.559608 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"08b2533f20e53b443f49641b38df9fdcd9d2912e048c2c60d70e79008fff716c"} Feb 14 14:10:10 crc kubenswrapper[4750]: I0214 14:10:10.574241 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"dd0ed323cc73bffe58acd5e9484cf90fdf7f721d792abcab9492fdd800a35e00"} Feb 14 14:10:10 crc kubenswrapper[4750]: I0214 14:10:10.574642 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"8d2eb8286cd4340c5b93dbbc661473669c5332ce273fad5e1f7907d39521210f"} Feb 14 14:10:10 crc kubenswrapper[4750]: I0214 14:10:10.574661 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"01141697cb216fcda1a0962ebd1f2120e5cde9637fbe6d1165e5e5f3ad6d187b"} Feb 14 14:10:10 crc 
kubenswrapper[4750]: I0214 14:10:10.574678 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"9f27c4bb5248b07dd5f710aeee707a7b8d5741f88300c031db65540409775a81"} Feb 14 14:10:11 crc kubenswrapper[4750]: I0214 14:10:11.594758 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mm2g" event={"ID":"9fe99c26-de80-4b40-805c-95d804f86cf7","Type":"ContainerStarted","Data":"73c31cfd4531833d143cb5938ee3b6caa2616801bc58d07634fdc939cb6b9152"} Feb 14 14:10:11 crc kubenswrapper[4750]: I0214 14:10:11.595164 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:10:11 crc kubenswrapper[4750]: I0214 14:10:11.646242 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9mm2g" podStartSLOduration=6.328699329 podStartE2EDuration="13.646207992s" podCreationTimestamp="2026-02-14 14:09:58 +0000 UTC" firstStartedPulling="2026-02-14 14:09:58.547622276 +0000 UTC m=+1070.573611757" lastFinishedPulling="2026-02-14 14:10:05.865130939 +0000 UTC m=+1077.891120420" observedRunningTime="2026-02-14 14:10:11.642088884 +0000 UTC m=+1083.668078405" watchObservedRunningTime="2026-02-14 14:10:11.646207992 +0000 UTC m=+1083.672197513" Feb 14 14:10:13 crc kubenswrapper[4750]: I0214 14:10:13.354622 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:10:13 crc kubenswrapper[4750]: I0214 14:10:13.419926 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:10:18 crc kubenswrapper[4750]: I0214 14:10:18.356459 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9mm2g" Feb 14 14:10:18 crc kubenswrapper[4750]: I0214 14:10:18.381191 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8ljxj" Feb 14 14:10:18 crc kubenswrapper[4750]: I0214 14:10:18.465372 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-v9m54" Feb 14 14:10:19 crc kubenswrapper[4750]: I0214 14:10:19.949895 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wbx4h" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.496583 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tm7xh"] Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.498044 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.519363 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tkkjr" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.520133 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.526628 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.533664 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tm7xh"] Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.594291 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpf4j\" (UniqueName: \"kubernetes.io/projected/0f00286b-008c-4863-b623-9789f8fa3b7a-kube-api-access-lpf4j\") pod \"openstack-operator-index-tm7xh\" (UID: \"0f00286b-008c-4863-b623-9789f8fa3b7a\") " pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:22 
crc kubenswrapper[4750]: I0214 14:10:22.696302 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpf4j\" (UniqueName: \"kubernetes.io/projected/0f00286b-008c-4863-b623-9789f8fa3b7a-kube-api-access-lpf4j\") pod \"openstack-operator-index-tm7xh\" (UID: \"0f00286b-008c-4863-b623-9789f8fa3b7a\") " pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.733169 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpf4j\" (UniqueName: \"kubernetes.io/projected/0f00286b-008c-4863-b623-9789f8fa3b7a-kube-api-access-lpf4j\") pod \"openstack-operator-index-tm7xh\" (UID: \"0f00286b-008c-4863-b623-9789f8fa3b7a\") " pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:22 crc kubenswrapper[4750]: I0214 14:10:22.832568 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:23 crc kubenswrapper[4750]: I0214 14:10:23.299497 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tm7xh"] Feb 14 14:10:23 crc kubenswrapper[4750]: W0214 14:10:23.304861 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f00286b_008c_4863_b623_9789f8fa3b7a.slice/crio-8191a884aed25762781a2855983653012e10c5f7a3966cd43ad7852630294997 WatchSource:0}: Error finding container 8191a884aed25762781a2855983653012e10c5f7a3966cd43ad7852630294997: Status 404 returned error can't find the container with id 8191a884aed25762781a2855983653012e10c5f7a3966cd43ad7852630294997 Feb 14 14:10:23 crc kubenswrapper[4750]: I0214 14:10:23.709413 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tm7xh" 
event={"ID":"0f00286b-008c-4863-b623-9789f8fa3b7a","Type":"ContainerStarted","Data":"8191a884aed25762781a2855983653012e10c5f7a3966cd43ad7852630294997"} Feb 14 14:10:28 crc kubenswrapper[4750]: I0214 14:10:28.754506 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tm7xh" event={"ID":"0f00286b-008c-4863-b623-9789f8fa3b7a","Type":"ContainerStarted","Data":"acad296c4358d917ba803038b54ea2023d4c64653f2acb3599e555d3cb9a8f14"} Feb 14 14:10:28 crc kubenswrapper[4750]: I0214 14:10:28.787343 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tm7xh" podStartSLOduration=1.838156127 podStartE2EDuration="6.78731197s" podCreationTimestamp="2026-02-14 14:10:22 +0000 UTC" firstStartedPulling="2026-02-14 14:10:23.307621827 +0000 UTC m=+1095.333611348" lastFinishedPulling="2026-02-14 14:10:28.2567777 +0000 UTC m=+1100.282767191" observedRunningTime="2026-02-14 14:10:28.783975205 +0000 UTC m=+1100.809964726" watchObservedRunningTime="2026-02-14 14:10:28.78731197 +0000 UTC m=+1100.813301491" Feb 14 14:10:32 crc kubenswrapper[4750]: I0214 14:10:32.833005 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:32 crc kubenswrapper[4750]: I0214 14:10:32.833461 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:32 crc kubenswrapper[4750]: I0214 14:10:32.915557 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:33 crc kubenswrapper[4750]: I0214 14:10:33.822033 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tm7xh" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.569728 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294"] Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.575487 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.577052 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zzf49" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.595160 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294"] Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.686495 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-bundle\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.686562 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9l57\" (UniqueName: \"kubernetes.io/projected/4846bde8-e74e-40a2-b3d7-98176fd2552b-kube-api-access-z9l57\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.686809 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-util\") pod 
\"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.788643 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-util\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.788735 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-bundle\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.788774 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9l57\" (UniqueName: \"kubernetes.io/projected/4846bde8-e74e-40a2-b3d7-98176fd2552b-kube-api-access-z9l57\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.789175 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-util\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " 
pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.789330 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-bundle\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.808960 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9l57\" (UniqueName: \"kubernetes.io/projected/4846bde8-e74e-40a2-b3d7-98176fd2552b-kube-api-access-z9l57\") pod \"81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:39 crc kubenswrapper[4750]: I0214 14:10:39.897393 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:40 crc kubenswrapper[4750]: I0214 14:10:40.354601 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294"] Feb 14 14:10:40 crc kubenswrapper[4750]: I0214 14:10:40.854957 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" event={"ID":"4846bde8-e74e-40a2-b3d7-98176fd2552b","Type":"ContainerStarted","Data":"04213c7ccc31a63dbed7318fb7e5a6bd6321939e95cba190d798baaa9127ea66"} Feb 14 14:10:40 crc kubenswrapper[4750]: I0214 14:10:40.855024 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" event={"ID":"4846bde8-e74e-40a2-b3d7-98176fd2552b","Type":"ContainerStarted","Data":"adea3229baef81a1216c33c6922558009f1649b5eb4afa923567c7400df94931"} Feb 14 14:10:41 crc kubenswrapper[4750]: I0214 14:10:41.861969 4750 generic.go:334] "Generic (PLEG): container finished" podID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerID="04213c7ccc31a63dbed7318fb7e5a6bd6321939e95cba190d798baaa9127ea66" exitCode=0 Feb 14 14:10:41 crc kubenswrapper[4750]: I0214 14:10:41.862017 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" event={"ID":"4846bde8-e74e-40a2-b3d7-98176fd2552b","Type":"ContainerDied","Data":"04213c7ccc31a63dbed7318fb7e5a6bd6321939e95cba190d798baaa9127ea66"} Feb 14 14:10:42 crc kubenswrapper[4750]: I0214 14:10:42.877292 4750 generic.go:334] "Generic (PLEG): container finished" podID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerID="5ea00fe5faec8da200017ae838a6625d66c6d2e76477f6abc05022afd0443697" exitCode=0 Feb 14 14:10:42 crc kubenswrapper[4750]: I0214 14:10:42.877362 4750 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" event={"ID":"4846bde8-e74e-40a2-b3d7-98176fd2552b","Type":"ContainerDied","Data":"5ea00fe5faec8da200017ae838a6625d66c6d2e76477f6abc05022afd0443697"} Feb 14 14:10:43 crc kubenswrapper[4750]: E0214 14:10:43.267494 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4846bde8_e74e_40a2_b3d7_98176fd2552b.slice/crio-c6ac5a0f203961dd3b7c9b9a59954fd18cd9d7664e035568ae2a7effd872427d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4846bde8_e74e_40a2_b3d7_98176fd2552b.slice/crio-conmon-c6ac5a0f203961dd3b7c9b9a59954fd18cd9d7664e035568ae2a7effd872427d.scope\": RecentStats: unable to find data in memory cache]" Feb 14 14:10:43 crc kubenswrapper[4750]: I0214 14:10:43.889536 4750 generic.go:334] "Generic (PLEG): container finished" podID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerID="c6ac5a0f203961dd3b7c9b9a59954fd18cd9d7664e035568ae2a7effd872427d" exitCode=0 Feb 14 14:10:43 crc kubenswrapper[4750]: I0214 14:10:43.889894 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" event={"ID":"4846bde8-e74e-40a2-b3d7-98176fd2552b","Type":"ContainerDied","Data":"c6ac5a0f203961dd3b7c9b9a59954fd18cd9d7664e035568ae2a7effd872427d"} Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.282141 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.394684 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-util\") pod \"4846bde8-e74e-40a2-b3d7-98176fd2552b\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.394887 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9l57\" (UniqueName: \"kubernetes.io/projected/4846bde8-e74e-40a2-b3d7-98176fd2552b-kube-api-access-z9l57\") pod \"4846bde8-e74e-40a2-b3d7-98176fd2552b\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.395052 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-bundle\") pod \"4846bde8-e74e-40a2-b3d7-98176fd2552b\" (UID: \"4846bde8-e74e-40a2-b3d7-98176fd2552b\") " Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.395594 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-bundle" (OuterVolumeSpecName: "bundle") pod "4846bde8-e74e-40a2-b3d7-98176fd2552b" (UID: "4846bde8-e74e-40a2-b3d7-98176fd2552b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.403377 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4846bde8-e74e-40a2-b3d7-98176fd2552b-kube-api-access-z9l57" (OuterVolumeSpecName: "kube-api-access-z9l57") pod "4846bde8-e74e-40a2-b3d7-98176fd2552b" (UID: "4846bde8-e74e-40a2-b3d7-98176fd2552b"). InnerVolumeSpecName "kube-api-access-z9l57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.412897 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-util" (OuterVolumeSpecName: "util") pod "4846bde8-e74e-40a2-b3d7-98176fd2552b" (UID: "4846bde8-e74e-40a2-b3d7-98176fd2552b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.497230 4750 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-util\") on node \"crc\" DevicePath \"\"" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.497284 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9l57\" (UniqueName: \"kubernetes.io/projected/4846bde8-e74e-40a2-b3d7-98176fd2552b-kube-api-access-z9l57\") on node \"crc\" DevicePath \"\"" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.497298 4750 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4846bde8-e74e-40a2-b3d7-98176fd2552b-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.911577 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" event={"ID":"4846bde8-e74e-40a2-b3d7-98176fd2552b","Type":"ContainerDied","Data":"adea3229baef81a1216c33c6922558009f1649b5eb4afa923567c7400df94931"} Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.911633 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adea3229baef81a1216c33c6922558009f1649b5eb4afa923567c7400df94931" Feb 14 14:10:45 crc kubenswrapper[4750]: I0214 14:10:45.911754 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.621431 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9"] Feb 14 14:10:51 crc kubenswrapper[4750]: E0214 14:10:51.622092 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="pull" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.622103 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="pull" Feb 14 14:10:51 crc kubenswrapper[4750]: E0214 14:10:51.622134 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="extract" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.622139 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="extract" Feb 14 14:10:51 crc kubenswrapper[4750]: E0214 14:10:51.622158 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="util" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.622165 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="util" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.622304 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="4846bde8-e74e-40a2-b3d7-98176fd2552b" containerName="extract" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.622798 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.624616 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q76v6" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.701985 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9"] Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.716855 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25scd\" (UniqueName: \"kubernetes.io/projected/ec0b7c77-5944-4b0e-bbd1-af0e3e14da56-kube-api-access-25scd\") pod \"openstack-operator-controller-init-7b948d557b-gvsr9\" (UID: \"ec0b7c77-5944-4b0e-bbd1-af0e3e14da56\") " pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.818845 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25scd\" (UniqueName: \"kubernetes.io/projected/ec0b7c77-5944-4b0e-bbd1-af0e3e14da56-kube-api-access-25scd\") pod \"openstack-operator-controller-init-7b948d557b-gvsr9\" (UID: \"ec0b7c77-5944-4b0e-bbd1-af0e3e14da56\") " pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.862182 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25scd\" (UniqueName: \"kubernetes.io/projected/ec0b7c77-5944-4b0e-bbd1-af0e3e14da56-kube-api-access-25scd\") pod \"openstack-operator-controller-init-7b948d557b-gvsr9\" (UID: \"ec0b7c77-5944-4b0e-bbd1-af0e3e14da56\") " pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:10:51 crc kubenswrapper[4750]: I0214 14:10:51.946225 4750 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:10:52 crc kubenswrapper[4750]: I0214 14:10:52.411032 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9"] Feb 14 14:10:52 crc kubenswrapper[4750]: W0214 14:10:52.424054 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec0b7c77_5944_4b0e_bbd1_af0e3e14da56.slice/crio-0259bad5e728bceeff5b3a25e84e8238861bc46049507b978e82ba116b1048aa WatchSource:0}: Error finding container 0259bad5e728bceeff5b3a25e84e8238861bc46049507b978e82ba116b1048aa: Status 404 returned error can't find the container with id 0259bad5e728bceeff5b3a25e84e8238861bc46049507b978e82ba116b1048aa Feb 14 14:10:52 crc kubenswrapper[4750]: I0214 14:10:52.991264 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" event={"ID":"ec0b7c77-5944-4b0e-bbd1-af0e3e14da56","Type":"ContainerStarted","Data":"0259bad5e728bceeff5b3a25e84e8238861bc46049507b978e82ba116b1048aa"} Feb 14 14:10:57 crc kubenswrapper[4750]: I0214 14:10:57.025796 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" event={"ID":"ec0b7c77-5944-4b0e-bbd1-af0e3e14da56","Type":"ContainerStarted","Data":"3e9429308c5f5466ff3e37c25f856eb114e1b43ae884c6e6a8f44e972bf0d91c"} Feb 14 14:10:57 crc kubenswrapper[4750]: I0214 14:10:57.026572 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:10:57 crc kubenswrapper[4750]: I0214 14:10:57.068227 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" podStartSLOduration=2.441894485 
podStartE2EDuration="6.068192632s" podCreationTimestamp="2026-02-14 14:10:51 +0000 UTC" firstStartedPulling="2026-02-14 14:10:52.428388906 +0000 UTC m=+1124.454378397" lastFinishedPulling="2026-02-14 14:10:56.054687053 +0000 UTC m=+1128.080676544" observedRunningTime="2026-02-14 14:10:57.061625274 +0000 UTC m=+1129.087614755" watchObservedRunningTime="2026-02-14 14:10:57.068192632 +0000 UTC m=+1129.094182133" Feb 14 14:11:00 crc kubenswrapper[4750]: I0214 14:11:00.128993 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:11:00 crc kubenswrapper[4750]: I0214 14:11:00.129507 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:11:01 crc kubenswrapper[4750]: I0214 14:11:01.949958 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7b948d557b-gvsr9" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.328698 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.330408 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.332439 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7gz4k" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.336843 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.338175 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.340917 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vp9tb" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.345590 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.378493 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.413268 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.414382 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.420963 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-bbzz6" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.425790 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.443851 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.445284 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.448097 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hwf9d" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.449132 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.454624 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.457668 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-f9kh8" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.458042 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.460412 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.465003 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-nxbdv" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.473210 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.485311 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt86l\" (UniqueName: \"kubernetes.io/projected/ac57dc96-afbc-4c7c-bd3c-9b763974a1c9-kube-api-access-vt86l\") pod \"cinder-operator-controller-manager-5d946d989d-nvfv2\" (UID: \"ac57dc96-afbc-4c7c-bd3c-9b763974a1c9\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.485599 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxw6p\" (UniqueName: \"kubernetes.io/projected/4b1ad62d-48a6-4228-8de2-3710bd15b7f4-kube-api-access-pxw6p\") pod \"barbican-operator-controller-manager-868647ff47-77cn7\" (UID: \"4b1ad62d-48a6-4228-8de2-3710bd15b7f4\") " 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.489262 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.500859 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.520663 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.521644 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.530943 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-thm6m" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.531141 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.531470 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.551514 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.558438 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.560224 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-22l4c" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.576685 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.577790 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.591405 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hj9j\" (UniqueName: \"kubernetes.io/projected/8fbd7079-b94a-4632-bfd7-5d550d6cbe1d-kube-api-access-8hj9j\") pod \"horizon-operator-controller-manager-5b9b8895d5-wttvx\" (UID: \"8fbd7079-b94a-4632-bfd7-5d550d6cbe1d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.591734 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v78n\" (UniqueName: \"kubernetes.io/projected/27f7394c-167e-4dda-bd08-b2d2a49d5f13-kube-api-access-7v78n\") pod \"heat-operator-controller-manager-69f49c598c-7s6qb\" (UID: \"27f7394c-167e-4dda-bd08-b2d2a49d5f13\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.591807 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt86l\" (UniqueName: \"kubernetes.io/projected/ac57dc96-afbc-4c7c-bd3c-9b763974a1c9-kube-api-access-vt86l\") pod 
\"cinder-operator-controller-manager-5d946d989d-nvfv2\" (UID: \"ac57dc96-afbc-4c7c-bd3c-9b763974a1c9\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.591835 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxw6p\" (UniqueName: \"kubernetes.io/projected/4b1ad62d-48a6-4228-8de2-3710bd15b7f4-kube-api-access-pxw6p\") pod \"barbican-operator-controller-manager-868647ff47-77cn7\" (UID: \"4b1ad62d-48a6-4228-8de2-3710bd15b7f4\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.591944 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8nx\" (UniqueName: \"kubernetes.io/projected/b559262e-cbcd-486e-8602-ece46ff1ed14-kube-api-access-5g8nx\") pod \"glance-operator-controller-manager-77987464f4-sh6f9\" (UID: \"b559262e-cbcd-486e-8602-ece46ff1ed14\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.591987 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cg7\" (UniqueName: \"kubernetes.io/projected/ff1e2ca9-56b1-4511-b59b-14256631d65f-kube-api-access-x2cg7\") pod \"designate-operator-controller-manager-6d8bf5c495-bmlpf\" (UID: \"ff1e2ca9-56b1-4511-b59b-14256631d65f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.592463 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-875bj" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.618629 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.619622 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.640296 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxw6p\" (UniqueName: \"kubernetes.io/projected/4b1ad62d-48a6-4228-8de2-3710bd15b7f4-kube-api-access-pxw6p\") pod \"barbican-operator-controller-manager-868647ff47-77cn7\" (UID: \"4b1ad62d-48a6-4228-8de2-3710bd15b7f4\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.643878 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8khm6" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.670740 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt86l\" (UniqueName: \"kubernetes.io/projected/ac57dc96-afbc-4c7c-bd3c-9b763974a1c9-kube-api-access-vt86l\") pod \"cinder-operator-controller-manager-5d946d989d-nvfv2\" (UID: \"ac57dc96-afbc-4c7c-bd3c-9b763974a1c9\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.681474 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.683211 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.703528 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706357 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2cg7\" (UniqueName: \"kubernetes.io/projected/ff1e2ca9-56b1-4511-b59b-14256631d65f-kube-api-access-x2cg7\") pod \"designate-operator-controller-manager-6d8bf5c495-bmlpf\" (UID: \"ff1e2ca9-56b1-4511-b59b-14256631d65f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706415 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7n8g\" (UniqueName: \"kubernetes.io/projected/c04784fa-abc5-4c4c-b891-6d73db5a17e1-kube-api-access-f7n8g\") pod \"manila-operator-controller-manager-54f6768c69-5mm9s\" (UID: \"c04784fa-abc5-4c4c-b891-6d73db5a17e1\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706435 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hj9j\" (UniqueName: \"kubernetes.io/projected/8fbd7079-b94a-4632-bfd7-5d550d6cbe1d-kube-api-access-8hj9j\") pod \"horizon-operator-controller-manager-5b9b8895d5-wttvx\" (UID: \"8fbd7079-b94a-4632-bfd7-5d550d6cbe1d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706456 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsgv\" (UniqueName: \"kubernetes.io/projected/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-kube-api-access-rdsgv\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706475 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v78n\" (UniqueName: \"kubernetes.io/projected/27f7394c-167e-4dda-bd08-b2d2a49d5f13-kube-api-access-7v78n\") pod \"heat-operator-controller-manager-69f49c598c-7s6qb\" (UID: \"27f7394c-167e-4dda-bd08-b2d2a49d5f13\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706499 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2r2c\" (UniqueName: \"kubernetes.io/projected/253c171a-c8f6-47d7-9490-a91d08ecd980-kube-api-access-t2r2c\") pod \"ironic-operator-controller-manager-554564d7fc-lgsxc\" (UID: \"253c171a-c8f6-47d7-9490-a91d08ecd980\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706529 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.706602 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8nx\" (UniqueName: 
\"kubernetes.io/projected/b559262e-cbcd-486e-8602-ece46ff1ed14-kube-api-access-5g8nx\") pod \"glance-operator-controller-manager-77987464f4-sh6f9\" (UID: \"b559262e-cbcd-486e-8602-ece46ff1ed14\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.707755 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.716554 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.727853 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.729441 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.740076 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-q92vk" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.752829 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v78n\" (UniqueName: \"kubernetes.io/projected/27f7394c-167e-4dda-bd08-b2d2a49d5f13-kube-api-access-7v78n\") pod \"heat-operator-controller-manager-69f49c598c-7s6qb\" (UID: \"27f7394c-167e-4dda-bd08-b2d2a49d5f13\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.763741 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g8nx\" (UniqueName: 
\"kubernetes.io/projected/b559262e-cbcd-486e-8602-ece46ff1ed14-kube-api-access-5g8nx\") pod \"glance-operator-controller-manager-77987464f4-sh6f9\" (UID: \"b559262e-cbcd-486e-8602-ece46ff1ed14\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.777791 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2cg7\" (UniqueName: \"kubernetes.io/projected/ff1e2ca9-56b1-4511-b59b-14256631d65f-kube-api-access-x2cg7\") pod \"designate-operator-controller-manager-6d8bf5c495-bmlpf\" (UID: \"ff1e2ca9-56b1-4511-b59b-14256631d65f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.780103 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hj9j\" (UniqueName: \"kubernetes.io/projected/8fbd7079-b94a-4632-bfd7-5d550d6cbe1d-kube-api-access-8hj9j\") pod \"horizon-operator-controller-manager-5b9b8895d5-wttvx\" (UID: \"8fbd7079-b94a-4632-bfd7-5d550d6cbe1d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.799184 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.799826 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.800225 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.800660 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.805518 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-nxdpk" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811155 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811414 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2kt\" (UniqueName: \"kubernetes.io/projected/3e79ecca-328f-4049-945b-a506ba6d56f9-kube-api-access-jt2kt\") pod \"keystone-operator-controller-manager-b4d948c87-gnmwc\" (UID: \"3e79ecca-328f-4049-945b-a506ba6d56f9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811533 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgkh\" (UniqueName: \"kubernetes.io/projected/782775e0-d41f-4e9d-b6e5-4640a473b64a-kube-api-access-xfgkh\") pod \"nova-operator-controller-manager-567668f5cf-dq8x7\" (UID: \"782775e0-d41f-4e9d-b6e5-4640a473b64a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811642 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bsc9\" (UniqueName: \"kubernetes.io/projected/413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54-kube-api-access-2bsc9\") pod 
\"mariadb-operator-controller-manager-6994f66f48-87m54\" (UID: \"413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811789 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7n8g\" (UniqueName: \"kubernetes.io/projected/c04784fa-abc5-4c4c-b891-6d73db5a17e1-kube-api-access-f7n8g\") pod \"manila-operator-controller-manager-54f6768c69-5mm9s\" (UID: \"c04784fa-abc5-4c4c-b891-6d73db5a17e1\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811874 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsgv\" (UniqueName: \"kubernetes.io/projected/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-kube-api-access-rdsgv\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.811969 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2r2c\" (UniqueName: \"kubernetes.io/projected/253c171a-c8f6-47d7-9490-a91d08ecd980-kube-api-access-t2r2c\") pod \"ironic-operator-controller-manager-554564d7fc-lgsxc\" (UID: \"253c171a-c8f6-47d7-9490-a91d08ecd980\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:11:21 crc kubenswrapper[4750]: E0214 14:11:21.812897 4750 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:21 crc kubenswrapper[4750]: E0214 14:11:21.812965 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert 
podName:6e2542fa-3b0a-4a09-8d18-54037ebbbdf8 nodeName:}" failed. No retries permitted until 2026-02-14 14:11:22.312949066 +0000 UTC m=+1154.338938547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert") pod "infra-operator-controller-manager-79d975b745-vjpnq" (UID: "6e2542fa-3b0a-4a09-8d18-54037ebbbdf8") : secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.814781 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.815749 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.830345 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.834234 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-sgvws" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.847429 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.855788 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7n8g\" (UniqueName: \"kubernetes.io/projected/c04784fa-abc5-4c4c-b891-6d73db5a17e1-kube-api-access-f7n8g\") pod \"manila-operator-controller-manager-54f6768c69-5mm9s\" (UID: \"c04784fa-abc5-4c4c-b891-6d73db5a17e1\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 
14:11:21.856812 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2r2c\" (UniqueName: \"kubernetes.io/projected/253c171a-c8f6-47d7-9490-a91d08ecd980-kube-api-access-t2r2c\") pod \"ironic-operator-controller-manager-554564d7fc-lgsxc\" (UID: \"253c171a-c8f6-47d7-9490-a91d08ecd980\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.939239 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.946653 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsgv\" (UniqueName: \"kubernetes.io/projected/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-kube-api-access-rdsgv\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.960781 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7"] Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.969449 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6k2\" (UniqueName: \"kubernetes.io/projected/b146fda3-a4a4-4ebd-9ac0-32016dac7650-kube-api-access-xd6k2\") pod \"neutron-operator-controller-manager-64ddbf8bb-92td7\" (UID: \"b146fda3-a4a4-4ebd-9ac0-32016dac7650\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.969533 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2kt\" (UniqueName: 
\"kubernetes.io/projected/3e79ecca-328f-4049-945b-a506ba6d56f9-kube-api-access-jt2kt\") pod \"keystone-operator-controller-manager-b4d948c87-gnmwc\" (UID: \"3e79ecca-328f-4049-945b-a506ba6d56f9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.969599 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgkh\" (UniqueName: \"kubernetes.io/projected/782775e0-d41f-4e9d-b6e5-4640a473b64a-kube-api-access-xfgkh\") pod \"nova-operator-controller-manager-567668f5cf-dq8x7\" (UID: \"782775e0-d41f-4e9d-b6e5-4640a473b64a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:11:21 crc kubenswrapper[4750]: I0214 14:11:21.969640 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bsc9\" (UniqueName: \"kubernetes.io/projected/413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54-kube-api-access-2bsc9\") pod \"mariadb-operator-controller-manager-6994f66f48-87m54\" (UID: \"413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.029605 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.057096 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.057755 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bsc9\" (UniqueName: \"kubernetes.io/projected/413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54-kube-api-access-2bsc9\") pod \"mariadb-operator-controller-manager-6994f66f48-87m54\" (UID: \"413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.058143 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgkh\" (UniqueName: \"kubernetes.io/projected/782775e0-d41f-4e9d-b6e5-4640a473b64a-kube-api-access-xfgkh\") pod \"nova-operator-controller-manager-567668f5cf-dq8x7\" (UID: \"782775e0-d41f-4e9d-b6e5-4640a473b64a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.058348 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2kt\" (UniqueName: \"kubernetes.io/projected/3e79ecca-328f-4049-945b-a506ba6d56f9-kube-api-access-jt2kt\") pod \"keystone-operator-controller-manager-b4d948c87-gnmwc\" (UID: \"3e79ecca-328f-4049-945b-a506ba6d56f9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.062327 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.068896 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.071574 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.075974 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6k2\" (UniqueName: \"kubernetes.io/projected/b146fda3-a4a4-4ebd-9ac0-32016dac7650-kube-api-access-xd6k2\") pod \"neutron-operator-controller-manager-64ddbf8bb-92td7\" (UID: \"b146fda3-a4a4-4ebd-9ac0-32016dac7650\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.085670 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-57ht2" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.107416 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.112975 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6k2\" (UniqueName: \"kubernetes.io/projected/b146fda3-a4a4-4ebd-9ac0-32016dac7650-kube-api-access-xd6k2\") pod \"neutron-operator-controller-manager-64ddbf8bb-92td7\" (UID: \"b146fda3-a4a4-4ebd-9ac0-32016dac7650\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.125309 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.129192 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.136879 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.137176 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dcxcc" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.137434 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.138648 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.139819 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kds7q" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.159763 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.160893 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.169156 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ztfhq" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.172051 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.177011 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpw4z\" (UniqueName: \"kubernetes.io/projected/64bc51de-7c3f-406e-899b-cbf5339658ea-kube-api-access-xpw4z\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.179525 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.179594 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbhw\" (UniqueName: \"kubernetes.io/projected/32e5c795-f58a-41c2-8b75-53ef0f77bef8-kube-api-access-9sbhw\") pod \"placement-operator-controller-manager-8497b45c89-5dvwn\" (UID: \"32e5c795-f58a-41c2-8b75-53ef0f77bef8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:11:22 crc kubenswrapper[4750]: 
I0214 14:11:22.179666 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq89j\" (UniqueName: \"kubernetes.io/projected/585cf590-6dcc-49d2-a01f-9d6fa1612328-kube-api-access-zq89j\") pod \"ovn-operator-controller-manager-d44cf6b75-nzqxf\" (UID: \"585cf590-6dcc-49d2-a01f-9d6fa1612328\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.179744 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcncx\" (UniqueName: \"kubernetes.io/projected/60e32c9b-3598-476f-85d8-7cab15748de5-kube-api-access-fcncx\") pod \"octavia-operator-controller-manager-69f8888797-dnj8w\" (UID: \"60e32c9b-3598-476f-85d8-7cab15748de5\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.193893 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.216845 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.218653 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.222797 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kvv6z" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.228940 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.250320 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.251454 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.254751 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6278g" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.272278 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285341 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpw4z\" (UniqueName: \"kubernetes.io/projected/64bc51de-7c3f-406e-899b-cbf5339658ea-kube-api-access-xpw4z\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285711 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285786 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbhw\" (UniqueName: \"kubernetes.io/projected/32e5c795-f58a-41c2-8b75-53ef0f77bef8-kube-api-access-9sbhw\") pod \"placement-operator-controller-manager-8497b45c89-5dvwn\" (UID: \"32e5c795-f58a-41c2-8b75-53ef0f77bef8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285831 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq89j\" (UniqueName: \"kubernetes.io/projected/585cf590-6dcc-49d2-a01f-9d6fa1612328-kube-api-access-zq89j\") pod \"ovn-operator-controller-manager-d44cf6b75-nzqxf\" (UID: \"585cf590-6dcc-49d2-a01f-9d6fa1612328\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285903 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcncx\" (UniqueName: \"kubernetes.io/projected/60e32c9b-3598-476f-85d8-7cab15748de5-kube-api-access-fcncx\") pod \"octavia-operator-controller-manager-69f8888797-dnj8w\" (UID: \"60e32c9b-3598-476f-85d8-7cab15748de5\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285953 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ntx4\" (UniqueName: \"kubernetes.io/projected/8d3cff8a-de26-4c32-96b3-080e797d527f-kube-api-access-9ntx4\") pod 
\"telemetry-operator-controller-manager-6fdcfd45d9-rqdd9\" (UID: \"8d3cff8a-de26-4c32-96b3-080e797d527f\") " pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.285997 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk58h\" (UniqueName: \"kubernetes.io/projected/cfe0761e-63fc-480a-bfe7-c8c3e78d3785-kube-api-access-zk58h\") pod \"swift-operator-controller-manager-68f46476f-s7nsg\" (UID: \"cfe0761e-63fc-480a-bfe7-c8c3e78d3785\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.286299 4750 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.286375 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert podName:64bc51de-7c3f-406e-899b-cbf5339658ea nodeName:}" failed. No retries permitted until 2026-02-14 14:11:22.786354711 +0000 UTC m=+1154.812344282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" (UID: "64bc51de-7c3f-406e-899b-cbf5339658ea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.301166 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpw4z\" (UniqueName: \"kubernetes.io/projected/64bc51de-7c3f-406e-899b-cbf5339658ea-kube-api-access-xpw4z\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.309725 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.311366 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.320324 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcncx\" (UniqueName: \"kubernetes.io/projected/60e32c9b-3598-476f-85d8-7cab15748de5-kube-api-access-fcncx\") pod \"octavia-operator-controller-manager-69f8888797-dnj8w\" (UID: \"60e32c9b-3598-476f-85d8-7cab15748de5\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.320337 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbhw\" (UniqueName: \"kubernetes.io/projected/32e5c795-f58a-41c2-8b75-53ef0f77bef8-kube-api-access-9sbhw\") pod \"placement-operator-controller-manager-8497b45c89-5dvwn\" (UID: \"32e5c795-f58a-41c2-8b75-53ef0f77bef8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.320669 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq89j\" (UniqueName: \"kubernetes.io/projected/585cf590-6dcc-49d2-a01f-9d6fa1612328-kube-api-access-zq89j\") pod \"ovn-operator-controller-manager-d44cf6b75-nzqxf\" (UID: \"585cf590-6dcc-49d2-a01f-9d6fa1612328\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.323222 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zx56k"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.324292 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.326218 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bxmxv" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.332672 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.340091 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zx56k"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.340457 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.346370 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.347496 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.352758 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.354834 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6kmgw" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.374843 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.382272 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.383361 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.384944 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.385243 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.385400 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xrm79" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.389318 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjn2\" (UniqueName: \"kubernetes.io/projected/34ecdca8-2927-432e-b770-c0c0d0b750e9-kube-api-access-jgjn2\") pod \"watcher-operator-controller-manager-5db88f68c-xt5f9\" (UID: \"34ecdca8-2927-432e-b770-c0c0d0b750e9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.389562 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sft8\" (UniqueName: \"kubernetes.io/projected/5ffa3c9f-1ef0-43df-b99c-3dd0c918f129-kube-api-access-6sft8\") pod \"test-operator-controller-manager-7866795846-zx56k\" (UID: \"5ffa3c9f-1ef0-43df-b99c-3dd0c918f129\") " 
pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.389641 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ntx4\" (UniqueName: \"kubernetes.io/projected/8d3cff8a-de26-4c32-96b3-080e797d527f-kube-api-access-9ntx4\") pod \"telemetry-operator-controller-manager-6fdcfd45d9-rqdd9\" (UID: \"8d3cff8a-de26-4c32-96b3-080e797d527f\") " pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.389748 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk58h\" (UniqueName: \"kubernetes.io/projected/cfe0761e-63fc-480a-bfe7-c8c3e78d3785-kube-api-access-zk58h\") pod \"swift-operator-controller-manager-68f46476f-s7nsg\" (UID: \"cfe0761e-63fc-480a-bfe7-c8c3e78d3785\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.389831 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.390013 4750 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.390148 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert podName:6e2542fa-3b0a-4a09-8d18-54037ebbbdf8 nodeName:}" failed. No retries permitted until 2026-02-14 14:11:23.390133952 +0000 UTC m=+1155.416123433 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert") pod "infra-operator-controller-manager-79d975b745-vjpnq" (UID: "6e2542fa-3b0a-4a09-8d18-54037ebbbdf8") : secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.408559 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ntx4\" (UniqueName: \"kubernetes.io/projected/8d3cff8a-de26-4c32-96b3-080e797d527f-kube-api-access-9ntx4\") pod \"telemetry-operator-controller-manager-6fdcfd45d9-rqdd9\" (UID: \"8d3cff8a-de26-4c32-96b3-080e797d527f\") " pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.418433 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.419651 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk58h\" (UniqueName: \"kubernetes.io/projected/cfe0761e-63fc-480a-bfe7-c8c3e78d3785-kube-api-access-zk58h\") pod \"swift-operator-controller-manager-68f46476f-s7nsg\" (UID: \"cfe0761e-63fc-480a-bfe7-c8c3e78d3785\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.443688 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.444443 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.445601 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.448630 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-46s4x" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.454026 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.497380 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sft8\" (UniqueName: \"kubernetes.io/projected/5ffa3c9f-1ef0-43df-b99c-3dd0c918f129-kube-api-access-6sft8\") pod \"test-operator-controller-manager-7866795846-zx56k\" (UID: \"5ffa3c9f-1ef0-43df-b99c-3dd0c918f129\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.497509 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.497548 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.497645 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbfl\" (UniqueName: \"kubernetes.io/projected/bfeed48e-2ac8-4348-8c4c-0e239bd8c568-kube-api-access-6tbfl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xfnm8\" (UID: \"bfeed48e-2ac8-4348-8c4c-0e239bd8c568\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.497672 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxm6f\" (UniqueName: \"kubernetes.io/projected/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-kube-api-access-jxm6f\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.497717 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjn2\" (UniqueName: \"kubernetes.io/projected/34ecdca8-2927-432e-b770-c0c0d0b750e9-kube-api-access-jgjn2\") pod \"watcher-operator-controller-manager-5db88f68c-xt5f9\" (UID: \"34ecdca8-2927-432e-b770-c0c0d0b750e9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.498834 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.523232 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sft8\" (UniqueName: \"kubernetes.io/projected/5ffa3c9f-1ef0-43df-b99c-3dd0c918f129-kube-api-access-6sft8\") pod \"test-operator-controller-manager-7866795846-zx56k\" (UID: \"5ffa3c9f-1ef0-43df-b99c-3dd0c918f129\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.532827 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjn2\" (UniqueName: \"kubernetes.io/projected/34ecdca8-2927-432e-b770-c0c0d0b750e9-kube-api-access-jgjn2\") pod \"watcher-operator-controller-manager-5db88f68c-xt5f9\" (UID: \"34ecdca8-2927-432e-b770-c0c0d0b750e9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.541943 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.545792 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.572829 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7"] Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.598982 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.599906 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbfl\" (UniqueName: \"kubernetes.io/projected/bfeed48e-2ac8-4348-8c4c-0e239bd8c568-kube-api-access-6tbfl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xfnm8\" (UID: \"bfeed48e-2ac8-4348-8c4c-0e239bd8c568\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.599944 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxm6f\" (UniqueName: \"kubernetes.io/projected/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-kube-api-access-jxm6f\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.600035 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.600062 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.600195 4750 secret.go:188] 
Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.600241 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:23.100224877 +0000 UTC m=+1155.126214358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.600396 4750 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.600417 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:23.100411173 +0000 UTC m=+1155.126400654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "metrics-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.620906 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbfl\" (UniqueName: \"kubernetes.io/projected/bfeed48e-2ac8-4348-8c4c-0e239bd8c568-kube-api-access-6tbfl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xfnm8\" (UID: \"bfeed48e-2ac8-4348-8c4c-0e239bd8c568\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.624199 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxm6f\" (UniqueName: \"kubernetes.io/projected/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-kube-api-access-jxm6f\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.679660 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.683998 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.703019 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.774092 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.803712 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.807785 4750 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: E0214 14:11:22.807852 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert podName:64bc51de-7c3f-406e-899b-cbf5339658ea nodeName:}" failed. No retries permitted until 2026-02-14 14:11:23.807835242 +0000 UTC m=+1155.833824713 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" (UID: "64bc51de-7c3f-406e-899b-cbf5339658ea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.959255 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9"] Feb 14 14:11:22 crc kubenswrapper[4750]: W0214 14:11:22.967305 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb559262e_cbcd_486e_8602_ece46ff1ed14.slice/crio-b15fa1ca5149dc82ca790ebd598d6ee8d33a6eeb6ced9ee3fe11f89b99ca2868 WatchSource:0}: Error finding container b15fa1ca5149dc82ca790ebd598d6ee8d33a6eeb6ced9ee3fe11f89b99ca2868: Status 404 returned error can't find the container with id b15fa1ca5149dc82ca790ebd598d6ee8d33a6eeb6ced9ee3fe11f89b99ca2868 Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.969511 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb"] Feb 14 14:11:22 crc kubenswrapper[4750]: W0214 14:11:22.976481 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f7394c_167e_4dda_bd08_b2d2a49d5f13.slice/crio-f2effda1ee89cca7ea4b4154b04013f686d3fed5ff45a891eff45293f4c8c348 WatchSource:0}: Error finding container f2effda1ee89cca7ea4b4154b04013f686d3fed5ff45a891eff45293f4c8c348: Status 404 returned error can't find the container with id f2effda1ee89cca7ea4b4154b04013f686d3fed5ff45a891eff45293f4c8c348 Feb 14 14:11:22 crc kubenswrapper[4750]: W0214 14:11:22.978981 4750 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod253c171a_c8f6_47d7_9490_a91d08ecd980.slice/crio-b9656bb8677a823ae7c22648788ff2d5c3f5bbe46efb0fd5def41c92686d456f WatchSource:0}: Error finding container b9656bb8677a823ae7c22648788ff2d5c3f5bbe46efb0fd5def41c92686d456f: Status 404 returned error can't find the container with id b9656bb8677a823ae7c22648788ff2d5c3f5bbe46efb0fd5def41c92686d456f Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.985754 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc"] Feb 14 14:11:22 crc kubenswrapper[4750]: W0214 14:11:22.991438 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fbd7079_b94a_4632_bfd7_5d550d6cbe1d.slice/crio-819eba59beb8eab0a4d0d62cc9fdd7eb8dc647e80c31281a274a5e4964cdbfab WatchSource:0}: Error finding container 819eba59beb8eab0a4d0d62cc9fdd7eb8dc647e80c31281a274a5e4964cdbfab: Status 404 returned error can't find the container with id 819eba59beb8eab0a4d0d62cc9fdd7eb8dc647e80c31281a274a5e4964cdbfab Feb 14 14:11:22 crc kubenswrapper[4750]: I0214 14:11:22.994657 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.109909 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.109959 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.110176 4750 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.110227 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:24.110207079 +0000 UTC m=+1156.136196560 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "webhook-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.110545 4750 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.110573 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:24.110565979 +0000 UTC m=+1156.136555460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "metrics-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.256396 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" event={"ID":"253c171a-c8f6-47d7-9490-a91d08ecd980","Type":"ContainerStarted","Data":"b9656bb8677a823ae7c22648788ff2d5c3f5bbe46efb0fd5def41c92686d456f"} Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.258539 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" event={"ID":"27f7394c-167e-4dda-bd08-b2d2a49d5f13","Type":"ContainerStarted","Data":"f2effda1ee89cca7ea4b4154b04013f686d3fed5ff45a891eff45293f4c8c348"} Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.262692 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" event={"ID":"8fbd7079-b94a-4632-bfd7-5d550d6cbe1d","Type":"ContainerStarted","Data":"819eba59beb8eab0a4d0d62cc9fdd7eb8dc647e80c31281a274a5e4964cdbfab"} Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.267551 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" event={"ID":"b559262e-cbcd-486e-8602-ece46ff1ed14","Type":"ContainerStarted","Data":"b15fa1ca5149dc82ca790ebd598d6ee8d33a6eeb6ced9ee3fe11f89b99ca2868"} Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.270904 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" 
event={"ID":"ac57dc96-afbc-4c7c-bd3c-9b763974a1c9","Type":"ContainerStarted","Data":"c1fa6a120cd1d5cd304685b5f144ca25b42b71313bbdfd2f7581a6b5d9118877"} Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.272030 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" event={"ID":"4b1ad62d-48a6-4228-8de2-3710bd15b7f4","Type":"ContainerStarted","Data":"49992be32c9b35f3ec725ece507a6debc03e349b800a0b63560b6c70f7758ee1"} Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.297206 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.317292 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.419844 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.420077 4750 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.420188 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert podName:6e2542fa-3b0a-4a09-8d18-54037ebbbdf8 nodeName:}" failed. No retries permitted until 2026-02-14 14:11:25.420167654 +0000 UTC m=+1157.446157235 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert") pod "infra-operator-controller-manager-79d975b745-vjpnq" (UID: "6e2542fa-3b0a-4a09-8d18-54037ebbbdf8") : secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.788239 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.805043 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.813548 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg"] Feb 14 14:11:23 crc kubenswrapper[4750]: W0214 14:11:23.818222 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782775e0_d41f_4e9d_b6e5_4640a473b64a.slice/crio-4182bb128fb9de5ccb4b24cfabb8883dbe47967d9eed3187006cc61d3517bf95 WatchSource:0}: Error finding container 4182bb128fb9de5ccb4b24cfabb8883dbe47967d9eed3187006cc61d3517bf95: Status 404 returned error can't find the container with id 4182bb128fb9de5ccb4b24cfabb8883dbe47967d9eed3187006cc61d3517bf95 Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.821579 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w"] Feb 14 14:11:23 crc kubenswrapper[4750]: W0214 14:11:23.825014 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e79ecca_328f_4049_945b_a506ba6d56f9.slice/crio-36f3f35c02b6941154916aabe3773fb8a07aa7c0731c2a00d3dbae4e7d499c61 WatchSource:0}: Error finding container 
36f3f35c02b6941154916aabe3773fb8a07aa7c0731c2a00d3dbae4e7d499c61: Status 404 returned error can't find the container with id 36f3f35c02b6941154916aabe3773fb8a07aa7c0731c2a00d3dbae4e7d499c61 Feb 14 14:11:23 crc kubenswrapper[4750]: W0214 14:11:23.825352 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413e7f1a_bab4_46b9_b59c_7d3a7cfc1e54.slice/crio-0efcd86a3f5a499c772d580d786038f0d1d642633065c017fb83452d8e78dd48 WatchSource:0}: Error finding container 0efcd86a3f5a499c772d580d786038f0d1d642633065c017fb83452d8e78dd48: Status 404 returned error can't find the container with id 0efcd86a3f5a499c772d580d786038f0d1d642633065c017fb83452d8e78dd48 Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.829162 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.829925 4750 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.829980 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert podName:64bc51de-7c3f-406e-899b-cbf5339658ea nodeName:}" failed. No retries permitted until 2026-02-14 14:11:25.829961467 +0000 UTC m=+1157.855950978 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" (UID: "64bc51de-7c3f-406e-899b-cbf5339658ea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.830254 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.840223 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7"] Feb 14 14:11:23 crc kubenswrapper[4750]: W0214 14:11:23.846550 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d3cff8a_de26_4c32_96b3_080e797d527f.slice/crio-44f5e8bce704df658b6bc0e502e2fc6a795202c7e9f78bea9f616af44895f050 WatchSource:0}: Error finding container 44f5e8bce704df658b6bc0e502e2fc6a795202c7e9f78bea9f616af44895f050: Status 404 returned error can't find the container with id 44f5e8bce704df658b6bc0e502e2fc6a795202c7e9f78bea9f616af44895f050 Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.849208 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9"] Feb 14 14:11:23 crc kubenswrapper[4750]: W0214 14:11:23.853759 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfe0761e_63fc_480a_bfe7_c8c3e78d3785.slice/crio-e862fde5f723db8a4bb917233e87d262612fdf76843f8df0d4793d9c8f6fa8ac WatchSource:0}: Error finding container e862fde5f723db8a4bb917233e87d262612fdf76843f8df0d4793d9c8f6fa8ac: Status 404 returned error can't find the container with id e862fde5f723db8a4bb917233e87d262612fdf76843f8df0d4793d9c8f6fa8ac Feb 14 14:11:23 
crc kubenswrapper[4750]: I0214 14:11:23.877168 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54"] Feb 14 14:11:23 crc kubenswrapper[4750]: I0214 14:11:23.889950 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn"] Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.903578 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zk58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-s7nsg_openstack-operators(cfe0761e-63fc-480a-bfe7-c8c3e78d3785): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 14:11:23 crc kubenswrapper[4750]: E0214 14:11:23.905770 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" podUID="cfe0761e-63fc-480a-bfe7-c8c3e78d3785" Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.135626 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.135683 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.135796 4750 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.135866 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:26.135848614 +0000 UTC m=+1158.161838095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "metrics-server-cert" not found Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.135964 4750 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.136049 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:26.136025799 +0000 UTC m=+1158.162015360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "webhook-server-cert" not found Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.155025 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zx56k"] Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.169363 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9"] Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.185081 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8"] Feb 14 14:11:24 crc kubenswrapper[4750]: W0214 14:11:24.201164 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfeed48e_2ac8_4348_8c4c_0e239bd8c568.slice/crio-ae4d477168bc497f1bb1021139b67ebaa50039c5fde248b7cc7f75b9adbd957f WatchSource:0}: Error finding container ae4d477168bc497f1bb1021139b67ebaa50039c5fde248b7cc7f75b9adbd957f: Status 404 returned error can't find the container with id ae4d477168bc497f1bb1021139b67ebaa50039c5fde248b7cc7f75b9adbd957f Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.203706 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tbfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xfnm8_openstack-operators(bfeed48e-2ac8-4348-8c4c-0e239bd8c568): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.204860 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" podUID="bfeed48e-2ac8-4348-8c4c-0e239bd8c568" Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.289474 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" event={"ID":"413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54","Type":"ContainerStarted","Data":"0efcd86a3f5a499c772d580d786038f0d1d642633065c017fb83452d8e78dd48"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.296484 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" event={"ID":"cfe0761e-63fc-480a-bfe7-c8c3e78d3785","Type":"ContainerStarted","Data":"e862fde5f723db8a4bb917233e87d262612fdf76843f8df0d4793d9c8f6fa8ac"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.298091 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" event={"ID":"5ffa3c9f-1ef0-43df-b99c-3dd0c918f129","Type":"ContainerStarted","Data":"aa19ae2ffa5b9228e85771dcde71efd6227013afa3a36a8f1a380fdef702fe59"} Feb 14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.299295 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" podUID="cfe0761e-63fc-480a-bfe7-c8c3e78d3785" Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.300220 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" event={"ID":"8d3cff8a-de26-4c32-96b3-080e797d527f","Type":"ContainerStarted","Data":"44f5e8bce704df658b6bc0e502e2fc6a795202c7e9f78bea9f616af44895f050"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.303175 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" event={"ID":"ff1e2ca9-56b1-4511-b59b-14256631d65f","Type":"ContainerStarted","Data":"df04b6eee94bb8bc1dd3d60797cc7720ac4e649bebb1ee2e8d758062aefcaf23"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.305067 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" event={"ID":"b146fda3-a4a4-4ebd-9ac0-32016dac7650","Type":"ContainerStarted","Data":"804183f9539e23fc1b3f2ef6851ca6cdfe4b9b5e90656a3edbc537d59bf0edf0"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.308145 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" event={"ID":"60e32c9b-3598-476f-85d8-7cab15748de5","Type":"ContainerStarted","Data":"6cae0ffa2492c0af94c7995b4d6ca2fc7cb282257f0a71d6f4574a36e6cc7703"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.310269 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" event={"ID":"bfeed48e-2ac8-4348-8c4c-0e239bd8c568","Type":"ContainerStarted","Data":"ae4d477168bc497f1bb1021139b67ebaa50039c5fde248b7cc7f75b9adbd957f"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.321318 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" event={"ID":"585cf590-6dcc-49d2-a01f-9d6fa1612328","Type":"ContainerStarted","Data":"88e77268945121f7e0443356cdb1b34c3d29905463cd354c7170914c802e5606"} Feb 
14 14:11:24 crc kubenswrapper[4750]: E0214 14:11:24.321614 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" podUID="bfeed48e-2ac8-4348-8c4c-0e239bd8c568" Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.324035 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" event={"ID":"782775e0-d41f-4e9d-b6e5-4640a473b64a","Type":"ContainerStarted","Data":"4182bb128fb9de5ccb4b24cfabb8883dbe47967d9eed3187006cc61d3517bf95"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.327405 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" event={"ID":"3e79ecca-328f-4049-945b-a506ba6d56f9","Type":"ContainerStarted","Data":"36f3f35c02b6941154916aabe3773fb8a07aa7c0731c2a00d3dbae4e7d499c61"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.329051 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" event={"ID":"c04784fa-abc5-4c4c-b891-6d73db5a17e1","Type":"ContainerStarted","Data":"81c462a91feb73b2f12eab619df3f7430bda53bb2de30dbb98f0724aa2a8e0ef"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.330746 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" event={"ID":"34ecdca8-2927-432e-b770-c0c0d0b750e9","Type":"ContainerStarted","Data":"0bdd8821241a632349b6f1e0ea8691ce8e905feaa8092eaecfff9f78dcc1ecc1"} Feb 14 14:11:24 crc kubenswrapper[4750]: I0214 14:11:24.331979 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" event={"ID":"32e5c795-f58a-41c2-8b75-53ef0f77bef8","Type":"ContainerStarted","Data":"ec4ced7afa9c915694aea86b6d560ef11cbb61f2f2e59f5b42bf87854076775f"} Feb 14 14:11:25 crc kubenswrapper[4750]: E0214 14:11:25.344302 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" podUID="cfe0761e-63fc-480a-bfe7-c8c3e78d3785" Feb 14 14:11:25 crc kubenswrapper[4750]: E0214 14:11:25.344575 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" podUID="bfeed48e-2ac8-4348-8c4c-0e239bd8c568" Feb 14 14:11:25 crc kubenswrapper[4750]: I0214 14:11:25.460508 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:25 crc kubenswrapper[4750]: E0214 14:11:25.460720 4750 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:25 crc kubenswrapper[4750]: E0214 14:11:25.460779 4750 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert podName:6e2542fa-3b0a-4a09-8d18-54037ebbbdf8 nodeName:}" failed. No retries permitted until 2026-02-14 14:11:29.460763259 +0000 UTC m=+1161.486752730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert") pod "infra-operator-controller-manager-79d975b745-vjpnq" (UID: "6e2542fa-3b0a-4a09-8d18-54037ebbbdf8") : secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:25 crc kubenswrapper[4750]: I0214 14:11:25.867710 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:25 crc kubenswrapper[4750]: E0214 14:11:25.867877 4750 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:25 crc kubenswrapper[4750]: E0214 14:11:25.867975 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert podName:64bc51de-7c3f-406e-899b-cbf5339658ea nodeName:}" failed. No retries permitted until 2026-02-14 14:11:29.867952048 +0000 UTC m=+1161.893941589 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" (UID: "64bc51de-7c3f-406e-899b-cbf5339658ea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:26 crc kubenswrapper[4750]: I0214 14:11:26.206550 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:26 crc kubenswrapper[4750]: I0214 14:11:26.206602 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:26 crc kubenswrapper[4750]: E0214 14:11:26.206722 4750 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 14:11:26 crc kubenswrapper[4750]: E0214 14:11:26.206723 4750 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 14:11:26 crc kubenswrapper[4750]: E0214 14:11:26.206778 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:30.206764009 +0000 UTC m=+1162.232753480 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "webhook-server-cert" not found Feb 14 14:11:26 crc kubenswrapper[4750]: E0214 14:11:26.206793 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:30.206786299 +0000 UTC m=+1162.232775780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "metrics-server-cert" not found Feb 14 14:11:29 crc kubenswrapper[4750]: I0214 14:11:29.470164 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:29 crc kubenswrapper[4750]: E0214 14:11:29.470388 4750 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:29 crc kubenswrapper[4750]: E0214 14:11:29.470988 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert podName:6e2542fa-3b0a-4a09-8d18-54037ebbbdf8 nodeName:}" failed. No retries permitted until 2026-02-14 14:11:37.470958578 +0000 UTC m=+1169.496948079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert") pod "infra-operator-controller-manager-79d975b745-vjpnq" (UID: "6e2542fa-3b0a-4a09-8d18-54037ebbbdf8") : secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:29 crc kubenswrapper[4750]: I0214 14:11:29.878086 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:29 crc kubenswrapper[4750]: E0214 14:11:29.878285 4750 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:29 crc kubenswrapper[4750]: E0214 14:11:29.878389 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert podName:64bc51de-7c3f-406e-899b-cbf5339658ea nodeName:}" failed. No retries permitted until 2026-02-14 14:11:37.878360073 +0000 UTC m=+1169.904349604 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" (UID: "64bc51de-7c3f-406e-899b-cbf5339658ea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:30 crc kubenswrapper[4750]: I0214 14:11:30.129141 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:11:30 crc kubenswrapper[4750]: I0214 14:11:30.129526 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:11:30 crc kubenswrapper[4750]: I0214 14:11:30.286438 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:30 crc kubenswrapper[4750]: I0214 14:11:30.286522 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:30 crc kubenswrapper[4750]: 
E0214 14:11:30.286773 4750 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 14:11:30 crc kubenswrapper[4750]: E0214 14:11:30.286841 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:38.286819718 +0000 UTC m=+1170.312809239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "webhook-server-cert" not found Feb 14 14:11:30 crc kubenswrapper[4750]: E0214 14:11:30.287128 4750 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 14:11:30 crc kubenswrapper[4750]: E0214 14:11:30.287298 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:38.287276601 +0000 UTC m=+1170.313266152 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "metrics-server-cert" not found Feb 14 14:11:37 crc kubenswrapper[4750]: I0214 14:11:37.476862 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:37 crc kubenswrapper[4750]: E0214 14:11:37.477033 4750 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:37 crc kubenswrapper[4750]: E0214 14:11:37.477588 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert podName:6e2542fa-3b0a-4a09-8d18-54037ebbbdf8 nodeName:}" failed. No retries permitted until 2026-02-14 14:11:53.477561308 +0000 UTC m=+1185.503550829 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert") pod "infra-operator-controller-manager-79d975b745-vjpnq" (UID: "6e2542fa-3b0a-4a09-8d18-54037ebbbdf8") : secret "infra-operator-webhook-server-cert" not found Feb 14 14:11:37 crc kubenswrapper[4750]: I0214 14:11:37.889460 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:37 crc kubenswrapper[4750]: E0214 14:11:37.889623 4750 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:37 crc kubenswrapper[4750]: E0214 14:11:37.889930 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert podName:64bc51de-7c3f-406e-899b-cbf5339658ea nodeName:}" failed. No retries permitted until 2026-02-14 14:11:53.889911962 +0000 UTC m=+1185.915901433 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" (UID: "64bc51de-7c3f-406e-899b-cbf5339658ea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 14:11:38 crc kubenswrapper[4750]: I0214 14:11:38.299263 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:38 crc kubenswrapper[4750]: I0214 14:11:38.299335 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:38 crc kubenswrapper[4750]: E0214 14:11:38.299485 4750 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 14:11:38 crc kubenswrapper[4750]: E0214 14:11:38.299621 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:54.299590598 +0000 UTC m=+1186.325580089 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "metrics-server-cert" not found Feb 14 14:11:38 crc kubenswrapper[4750]: E0214 14:11:38.299627 4750 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 14:11:38 crc kubenswrapper[4750]: E0214 14:11:38.299765 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs podName:aba78e62-6759-445f-b8a9-9c8b36cf4a3a nodeName:}" failed. No retries permitted until 2026-02-14 14:11:54.299729862 +0000 UTC m=+1186.325719503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs") pod "openstack-operator-controller-manager-6bd569c557-twg4v" (UID: "aba78e62-6759-445f-b8a9-9c8b36cf4a3a") : secret "webhook-server-cert" not found Feb 14 14:11:41 crc kubenswrapper[4750]: E0214 14:11:41.603600 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 14 14:11:41 crc kubenswrapper[4750]: E0214 14:11:41.604203 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt86l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-nvfv2_openstack-operators(ac57dc96-afbc-4c7c-bd3c-9b763974a1c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:41 crc kubenswrapper[4750]: E0214 14:11:41.605467 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" podUID="ac57dc96-afbc-4c7c-bd3c-9b763974a1c9" Feb 14 14:11:42 crc kubenswrapper[4750]: E0214 14:11:42.836074 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" podUID="ac57dc96-afbc-4c7c-bd3c-9b763974a1c9" Feb 14 14:11:43 crc kubenswrapper[4750]: E0214 14:11:43.480606 4750 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 14 14:11:43 crc kubenswrapper[4750]: E0214 14:11:43.480794 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2bsc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-87m54_openstack-operators(413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:43 crc kubenswrapper[4750]: E0214 14:11:43.482310 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" podUID="413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54" Feb 14 14:11:43 crc kubenswrapper[4750]: E0214 14:11:43.549713 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" podUID="413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54" Feb 14 14:11:44 crc kubenswrapper[4750]: E0214 14:11:44.099398 4750 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 14 14:11:44 crc kubenswrapper[4750]: E0214 14:11:44.099796 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcncx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-dnj8w_openstack-operators(60e32c9b-3598-476f-85d8-7cab15748de5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:44 crc kubenswrapper[4750]: E0214 14:11:44.101203 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" podUID="60e32c9b-3598-476f-85d8-7cab15748de5" Feb 14 14:11:44 crc kubenswrapper[4750]: E0214 14:11:44.558756 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" podUID="60e32c9b-3598-476f-85d8-7cab15748de5" Feb 14 14:11:45 crc kubenswrapper[4750]: E0214 14:11:45.962960 4750 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 14 14:11:45 crc kubenswrapper[4750]: E0214 14:11:45.963288 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xd6k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-92td7_openstack-operators(b146fda3-a4a4-4ebd-9ac0-32016dac7650): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:45 crc kubenswrapper[4750]: E0214 14:11:45.964601 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" podUID="b146fda3-a4a4-4ebd-9ac0-32016dac7650" Feb 14 14:11:46 crc kubenswrapper[4750]: E0214 14:11:46.573312 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" podUID="b146fda3-a4a4-4ebd-9ac0-32016dac7650" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.445570 4750 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.446545 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zq89j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-nzqxf_openstack-operators(585cf590-6dcc-49d2-a01f-9d6fa1612328): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.447778 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" podUID="585cf590-6dcc-49d2-a01f-9d6fa1612328" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.593978 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" podUID="585cf590-6dcc-49d2-a01f-9d6fa1612328" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.900612 4750 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.900847 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6sft8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-zx56k_openstack-operators(5ffa3c9f-1ef0-43df-b99c-3dd0c918f129): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:48 crc kubenswrapper[4750]: E0214 14:11:48.902214 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" podUID="5ffa3c9f-1ef0-43df-b99c-3dd0c918f129" Feb 14 14:11:49 crc kubenswrapper[4750]: E0214 14:11:49.616590 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" podUID="5ffa3c9f-1ef0-43df-b99c-3dd0c918f129" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.216366 4750 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.216580 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-xt5f9_openstack-operators(34ecdca8-2927-432e-b770-c0c0d0b750e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.217954 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" podUID="34ecdca8-2927-432e-b770-c0c0d0b750e9" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.633084 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" podUID="34ecdca8-2927-432e-b770-c0c0d0b750e9" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.726019 4750 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.726278 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sbhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-5dvwn_openstack-operators(32e5c795-f58a-41c2-8b75-53ef0f77bef8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:51 crc kubenswrapper[4750]: E0214 14:11:51.727467 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" podUID="32e5c795-f58a-41c2-8b75-53ef0f77bef8" Feb 14 14:11:52 crc kubenswrapper[4750]: E0214 14:11:52.649145 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" podUID="32e5c795-f58a-41c2-8b75-53ef0f77bef8" Feb 14 14:11:53 crc kubenswrapper[4750]: E0214 14:11:53.216587 4750 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 14 14:11:53 crc kubenswrapper[4750]: E0214 14:11:53.216760 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xfgkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-dq8x7_openstack-operators(782775e0-d41f-4e9d-b6e5-4640a473b64a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:53 crc kubenswrapper[4750]: E0214 14:11:53.218310 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" podUID="782775e0-d41f-4e9d-b6e5-4640a473b64a" Feb 14 14:11:53 crc kubenswrapper[4750]: I0214 14:11:53.499885 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:53 crc kubenswrapper[4750]: I0214 14:11:53.506542 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/6e2542fa-3b0a-4a09-8d18-54037ebbbdf8-cert\") pod \"infra-operator-controller-manager-79d975b745-vjpnq\" (UID: \"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:53 crc kubenswrapper[4750]: E0214 14:11:53.655908 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" podUID="782775e0-d41f-4e9d-b6e5-4640a473b64a" Feb 14 14:11:53 crc kubenswrapper[4750]: I0214 14:11:53.685587 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:11:53 crc kubenswrapper[4750]: I0214 14:11:53.907007 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:53 crc kubenswrapper[4750]: I0214 14:11:53.913397 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64bc51de-7c3f-406e-899b-cbf5339658ea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s\" (UID: \"64bc51de-7c3f-406e-899b-cbf5339658ea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:53 crc kubenswrapper[4750]: I0214 14:11:53.972343 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:11:54 crc kubenswrapper[4750]: I0214 14:11:54.313210 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:54 crc kubenswrapper[4750]: I0214 14:11:54.313265 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:54 crc kubenswrapper[4750]: I0214 14:11:54.317785 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-webhook-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:54 crc kubenswrapper[4750]: I0214 14:11:54.319889 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba78e62-6759-445f-b8a9-9c8b36cf4a3a-metrics-certs\") pod \"openstack-operator-controller-manager-6bd569c557-twg4v\" (UID: \"aba78e62-6759-445f-b8a9-9c8b36cf4a3a\") " pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:54 crc kubenswrapper[4750]: I0214 14:11:54.514383 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:56 crc kubenswrapper[4750]: E0214 14:11:56.996294 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 14 14:11:56 crc kubenswrapper[4750]: E0214 14:11:56.996803 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jt2kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-gnmwc_openstack-operators(3e79ecca-328f-4049-945b-a506ba6d56f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:56 crc kubenswrapper[4750]: E0214 14:11:56.998547 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" podUID="3e79ecca-328f-4049-945b-a506ba6d56f9" Feb 14 14:11:57 crc kubenswrapper[4750]: E0214 14:11:57.499022 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 14 14:11:57 crc kubenswrapper[4750]: E0214 14:11:57.499228 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tbfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xfnm8_openstack-operators(bfeed48e-2ac8-4348-8c4c-0e239bd8c568): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:11:57 crc kubenswrapper[4750]: E0214 14:11:57.500385 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" podUID="bfeed48e-2ac8-4348-8c4c-0e239bd8c568" Feb 14 14:11:57 crc kubenswrapper[4750]: E0214 14:11:57.725928 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" podUID="3e79ecca-328f-4049-945b-a506ba6d56f9" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.131624 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v"] Feb 14 14:11:58 crc kubenswrapper[4750]: W0214 14:11:58.141955 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaba78e62_6759_445f_b8a9_9c8b36cf4a3a.slice/crio-4812d1d1995040bfedcfef667cad421e8cac6d5f2ea7bd6269d57f4dae99d58d WatchSource:0}: Error finding container 4812d1d1995040bfedcfef667cad421e8cac6d5f2ea7bd6269d57f4dae99d58d: Status 404 returned error can't find the container with id 4812d1d1995040bfedcfef667cad421e8cac6d5f2ea7bd6269d57f4dae99d58d Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.246844 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s"] Feb 14 14:11:58 crc kubenswrapper[4750]: W0214 14:11:58.265771 4750 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64bc51de_7c3f_406e_899b_cbf5339658ea.slice/crio-287af433571a19bb26588670dbd114926052c617898640ba1336e71b261609bc WatchSource:0}: Error finding container 287af433571a19bb26588670dbd114926052c617898640ba1336e71b261609bc: Status 404 returned error can't find the container with id 287af433571a19bb26588670dbd114926052c617898640ba1336e71b261609bc Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.279355 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq"] Feb 14 14:11:58 crc kubenswrapper[4750]: W0214 14:11:58.282261 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e2542fa_3b0a_4a09_8d18_54037ebbbdf8.slice/crio-7604e2b2924aee9b232fa820835f9fde710b7a6c71268f5170a937f3d03cba7b WatchSource:0}: Error finding container 7604e2b2924aee9b232fa820835f9fde710b7a6c71268f5170a937f3d03cba7b: Status 404 returned error can't find the container with id 7604e2b2924aee9b232fa820835f9fde710b7a6c71268f5170a937f3d03cba7b Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.727341 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" event={"ID":"27f7394c-167e-4dda-bd08-b2d2a49d5f13","Type":"ContainerStarted","Data":"ccbdd34b0447faa68531e6f6d18c22e0ad83883011a8ebe93aeaa1484b6d8587"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.727689 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.728966 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" 
event={"ID":"c04784fa-abc5-4c4c-b891-6d73db5a17e1","Type":"ContainerStarted","Data":"1b6909262bd8a56243c0e512c7b735155c24dbc305df571c68f7c11d353d27c8"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.729349 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.731708 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" event={"ID":"aba78e62-6759-445f-b8a9-9c8b36cf4a3a","Type":"ContainerStarted","Data":"4e1a76adf294eca1e5f9785256d5fe55c06bb72911a210a84a86547edf00cfff"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.731746 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" event={"ID":"aba78e62-6759-445f-b8a9-9c8b36cf4a3a","Type":"ContainerStarted","Data":"4812d1d1995040bfedcfef667cad421e8cac6d5f2ea7bd6269d57f4dae99d58d"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.731780 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.773503 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.773537 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" event={"ID":"8fbd7079-b94a-4632-bfd7-5d550d6cbe1d","Type":"ContainerStarted","Data":"80c1daf7a9d9ecabfecdb08eb380a0f5e35fb2ef806904ca028b441f8e152d86"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.788261 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" event={"ID":"ac57dc96-afbc-4c7c-bd3c-9b763974a1c9","Type":"ContainerStarted","Data":"97e416d66de34d0419afff65da60d1c54065b7f46e9236a2d9a517f07e633a6a"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.788985 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.793934 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" podStartSLOduration=3.832648371 podStartE2EDuration="37.793920562s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:22.989269876 +0000 UTC m=+1155.015259357" lastFinishedPulling="2026-02-14 14:11:56.950542027 +0000 UTC m=+1188.976531548" observedRunningTime="2026-02-14 14:11:58.74393646 +0000 UTC m=+1190.769925941" watchObservedRunningTime="2026-02-14 14:11:58.793920562 +0000 UTC m=+1190.819910043" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.794017 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" podStartSLOduration=36.794013065 podStartE2EDuration="36.794013065s" podCreationTimestamp="2026-02-14 14:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:11:58.790228977 +0000 UTC m=+1190.816218478" watchObservedRunningTime="2026-02-14 14:11:58.794013065 +0000 UTC m=+1190.820002546" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.806767 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" 
event={"ID":"cfe0761e-63fc-480a-bfe7-c8c3e78d3785","Type":"ContainerStarted","Data":"32d1665f5c2ea93c625319b536ac6b37f28d8788f74ea671d71f1a131a687195"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.807743 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.819790 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" podStartSLOduration=4.175518304 podStartE2EDuration="37.819769108s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.306712305 +0000 UTC m=+1155.332701796" lastFinishedPulling="2026-02-14 14:11:56.950963079 +0000 UTC m=+1188.976952600" observedRunningTime="2026-02-14 14:11:58.815186837 +0000 UTC m=+1190.841176318" watchObservedRunningTime="2026-02-14 14:11:58.819769108 +0000 UTC m=+1190.845758609" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.823937 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" event={"ID":"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8","Type":"ContainerStarted","Data":"7604e2b2924aee9b232fa820835f9fde710b7a6c71268f5170a937f3d03cba7b"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.852393 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" event={"ID":"253c171a-c8f6-47d7-9490-a91d08ecd980","Type":"ContainerStarted","Data":"f26dab58be6193d175a35b6bf8cb22857a936a887621926b2b70c659f3c1cfc2"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.853093 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.854296 4750 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" event={"ID":"64bc51de-7c3f-406e-899b-cbf5339658ea","Type":"ContainerStarted","Data":"287af433571a19bb26588670dbd114926052c617898640ba1336e71b261609bc"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.855830 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" event={"ID":"b559262e-cbcd-486e-8602-ece46ff1ed14","Type":"ContainerStarted","Data":"3f8e5d554fe7036ec1903774de8d1f8f1e76d11bc580a7329399c30abcda1553"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.856671 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.858334 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" event={"ID":"8d3cff8a-de26-4c32-96b3-080e797d527f","Type":"ContainerStarted","Data":"378b750553e34f89dbb306618439ec979ec70fec6e70fa5e001a16b21fb830ab"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.858521 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.859873 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" event={"ID":"4b1ad62d-48a6-4228-8de2-3710bd15b7f4","Type":"ContainerStarted","Data":"4ee5f3d8d691308c80b98a34b3edc07ad2067363c39def0f152b8a0b95e537bf"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.860413 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:11:58 crc 
kubenswrapper[4750]: I0214 14:11:58.904043 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" event={"ID":"ff1e2ca9-56b1-4511-b59b-14256631d65f","Type":"ContainerStarted","Data":"755f621462c490d3da2ccec64b535f67710eeccde2df593a6ed038d67c83c85f"} Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.904869 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.909789 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" podStartSLOduration=4.260767565 podStartE2EDuration="37.909771559s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.903373119 +0000 UTC m=+1155.929362600" lastFinishedPulling="2026-02-14 14:11:57.552377113 +0000 UTC m=+1189.578366594" observedRunningTime="2026-02-14 14:11:58.908370399 +0000 UTC m=+1190.934359880" watchObservedRunningTime="2026-02-14 14:11:58.909771559 +0000 UTC m=+1190.935761040" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.946818 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" podStartSLOduration=2.782639075 podStartE2EDuration="37.946797622s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:22.432829615 +0000 UTC m=+1154.458819096" lastFinishedPulling="2026-02-14 14:11:57.596988162 +0000 UTC m=+1189.622977643" observedRunningTime="2026-02-14 14:11:58.942879941 +0000 UTC m=+1190.968869422" watchObservedRunningTime="2026-02-14 14:11:58.946797622 +0000 UTC m=+1190.972787103" Feb 14 14:11:58 crc kubenswrapper[4750]: I0214 14:11:58.975497 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" podStartSLOduration=4.023480803 podStartE2EDuration="37.975474768s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:22.996331029 +0000 UTC m=+1155.022320510" lastFinishedPulling="2026-02-14 14:11:56.948324954 +0000 UTC m=+1188.974314475" observedRunningTime="2026-02-14 14:11:58.959906455 +0000 UTC m=+1190.985895936" watchObservedRunningTime="2026-02-14 14:11:58.975474768 +0000 UTC m=+1191.001464249" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.026986 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" podStartSLOduration=9.281985091 podStartE2EDuration="38.026968444s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:22.971386964 +0000 UTC m=+1154.997376445" lastFinishedPulling="2026-02-14 14:11:51.716370287 +0000 UTC m=+1183.742359798" observedRunningTime="2026-02-14 14:11:59.020207431 +0000 UTC m=+1191.046196912" watchObservedRunningTime="2026-02-14 14:11:59.026968444 +0000 UTC m=+1191.052957925" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.048733 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" podStartSLOduration=4.409288366 podStartE2EDuration="38.048709372s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.310148284 +0000 UTC m=+1155.336137765" lastFinishedPulling="2026-02-14 14:11:56.94956926 +0000 UTC m=+1188.975558771" observedRunningTime="2026-02-14 14:11:59.038035399 +0000 UTC m=+1191.064024880" watchObservedRunningTime="2026-02-14 14:11:59.048709372 +0000 UTC m=+1191.074698853" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.054883 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" podStartSLOduration=4.086270318 podStartE2EDuration="38.054869318s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:22.981316249 +0000 UTC m=+1155.007305730" lastFinishedPulling="2026-02-14 14:11:56.949915209 +0000 UTC m=+1188.975904730" observedRunningTime="2026-02-14 14:11:59.051231384 +0000 UTC m=+1191.077220875" watchObservedRunningTime="2026-02-14 14:11:59.054869318 +0000 UTC m=+1191.080858799" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.082133 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" podStartSLOduration=14.769167985 podStartE2EDuration="38.082099162s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:22.591673533 +0000 UTC m=+1154.617663014" lastFinishedPulling="2026-02-14 14:11:45.90460467 +0000 UTC m=+1177.930594191" observedRunningTime="2026-02-14 14:11:59.071423179 +0000 UTC m=+1191.097412660" watchObservedRunningTime="2026-02-14 14:11:59.082099162 +0000 UTC m=+1191.108088633" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.098916 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" podStartSLOduration=13.508487263 podStartE2EDuration="38.09889733s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.850105164 +0000 UTC m=+1155.876094645" lastFinishedPulling="2026-02-14 14:11:48.440515231 +0000 UTC m=+1180.466504712" observedRunningTime="2026-02-14 14:11:59.091697146 +0000 UTC m=+1191.117686627" watchObservedRunningTime="2026-02-14 14:11:59.09889733 +0000 UTC m=+1191.124886811" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.920405 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" event={"ID":"413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54","Type":"ContainerStarted","Data":"262bdca1cc6b5c7c6ab8cadda45140e075402c9f6e269e406dfd170686d71b66"} Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.922320 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.937454 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" event={"ID":"60e32c9b-3598-476f-85d8-7cab15748de5","Type":"ContainerStarted","Data":"fd658fb57a52b6a46759d3bed653dc73641f9c973e286ea9ad4861760bd5e4fc"} Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.939437 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" podStartSLOduration=3.542530272 podStartE2EDuration="38.939422167s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.829203295 +0000 UTC m=+1155.855192776" lastFinishedPulling="2026-02-14 14:11:59.22609519 +0000 UTC m=+1191.252084671" observedRunningTime="2026-02-14 14:11:59.937703608 +0000 UTC m=+1191.963693089" watchObservedRunningTime="2026-02-14 14:11:59.939422167 +0000 UTC m=+1191.965411648" Feb 14 14:11:59 crc kubenswrapper[4750]: I0214 14:11:59.958464 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" podStartSLOduration=3.600737706 podStartE2EDuration="38.958447209s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.862289493 +0000 UTC m=+1155.888278974" lastFinishedPulling="2026-02-14 14:11:59.219998986 +0000 UTC m=+1191.245988477" observedRunningTime="2026-02-14 14:11:59.956810212 
+0000 UTC m=+1191.982799693" watchObservedRunningTime="2026-02-14 14:11:59.958447209 +0000 UTC m=+1191.984436690" Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.128545 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.128592 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.128634 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.129273 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02bb963338838db28a9a316102bca11f6cc643e0b302795fd6980c83c74ec211"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.129316 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://02bb963338838db28a9a316102bca11f6cc643e0b302795fd6980c83c74ec211" gracePeriod=600 Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.946960 4750 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" event={"ID":"b146fda3-a4a4-4ebd-9ac0-32016dac7650","Type":"ContainerStarted","Data":"8acb1a0a9bd327c6b4c7b08536e54006c2b5a439760dd3093b3eebacd6315b03"} Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.947256 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.949564 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="02bb963338838db28a9a316102bca11f6cc643e0b302795fd6980c83c74ec211" exitCode=0 Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.950010 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"02bb963338838db28a9a316102bca11f6cc643e0b302795fd6980c83c74ec211"} Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.950192 4750 scope.go:117] "RemoveContainer" containerID="4b148be393ce42f2c5c0534a347aaef624d5f007a262f94fbbdd37721a992213" Feb 14 14:12:00 crc kubenswrapper[4750]: I0214 14:12:00.971392 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" podStartSLOduration=3.689414569 podStartE2EDuration="39.971368682s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.864895917 +0000 UTC m=+1155.890885408" lastFinishedPulling="2026-02-14 14:12:00.14685004 +0000 UTC m=+1192.172839521" observedRunningTime="2026-02-14 14:12:00.963431866 +0000 UTC m=+1192.989421347" watchObservedRunningTime="2026-02-14 14:12:00.971368682 +0000 UTC m=+1192.997358163" Feb 14 14:12:02 crc kubenswrapper[4750]: I0214 14:12:02.060347 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5mm9s" Feb 14 14:12:02 crc kubenswrapper[4750]: I0214 14:12:02.444273 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:12:02 crc kubenswrapper[4750]: I0214 14:12:02.985865 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" event={"ID":"5ffa3c9f-1ef0-43df-b99c-3dd0c918f129","Type":"ContainerStarted","Data":"b9d26f901da297ea4badf54cab3a7e66b13f5075561aeb011368491b24611191"} Feb 14 14:12:02 crc kubenswrapper[4750]: I0214 14:12:02.987826 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:12:02 crc kubenswrapper[4750]: I0214 14:12:02.994578 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" event={"ID":"6e2542fa-3b0a-4a09-8d18-54037ebbbdf8","Type":"ContainerStarted","Data":"3504dcd76ddc070dd41cb0a0bc5eac4843e54a5eb256f138f002fee65ac34fdb"} Feb 14 14:12:02 crc kubenswrapper[4750]: I0214 14:12:02.995086 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:12:03 crc kubenswrapper[4750]: I0214 14:12:03.004453 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" event={"ID":"64bc51de-7c3f-406e-899b-cbf5339658ea","Type":"ContainerStarted","Data":"3319911841b7f444b4ea535ef64d1bc7d02330832af08ce5c5811b1842041914"} Feb 14 14:12:03 crc kubenswrapper[4750]: I0214 14:12:03.005291 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:12:03 crc kubenswrapper[4750]: I0214 14:12:03.011366 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" podStartSLOduration=3.946259873 podStartE2EDuration="42.011349882s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:24.175607433 +0000 UTC m=+1156.201596904" lastFinishedPulling="2026-02-14 14:12:02.240697432 +0000 UTC m=+1194.266686913" observedRunningTime="2026-02-14 14:12:03.007653486 +0000 UTC m=+1195.033642957" watchObservedRunningTime="2026-02-14 14:12:03.011349882 +0000 UTC m=+1195.037339363" Feb 14 14:12:03 crc kubenswrapper[4750]: I0214 14:12:03.011557 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"3911c83e66f85b48593d6e540ca28f5e1b07698970a4746fbe1e53b6a3e79ffd"} Feb 14 14:12:03 crc kubenswrapper[4750]: I0214 14:12:03.039635 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" podStartSLOduration=38.069313029 podStartE2EDuration="42.039618606s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:58.269236512 +0000 UTC m=+1190.295225993" lastFinishedPulling="2026-02-14 14:12:02.239542089 +0000 UTC m=+1194.265531570" observedRunningTime="2026-02-14 14:12:03.03765703 +0000 UTC m=+1195.063646531" watchObservedRunningTime="2026-02-14 14:12:03.039618606 +0000 UTC m=+1195.065608087" Feb 14 14:12:03 crc kubenswrapper[4750]: I0214 14:12:03.057243 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" 
podStartSLOduration=38.099267381 podStartE2EDuration="42.057221507s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:58.28640755 +0000 UTC m=+1190.312397031" lastFinishedPulling="2026-02-14 14:12:02.244361676 +0000 UTC m=+1194.270351157" observedRunningTime="2026-02-14 14:12:03.054961723 +0000 UTC m=+1195.080951204" watchObservedRunningTime="2026-02-14 14:12:03.057221507 +0000 UTC m=+1195.083210988" Feb 14 14:12:04 crc kubenswrapper[4750]: I0214 14:12:04.526390 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bd569c557-twg4v" Feb 14 14:12:06 crc kubenswrapper[4750]: I0214 14:12:06.043927 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" event={"ID":"34ecdca8-2927-432e-b770-c0c0d0b750e9","Type":"ContainerStarted","Data":"96c718eb203720138c04e487fa2652410dfff38e622fa4125314fa9d0292268c"} Feb 14 14:12:06 crc kubenswrapper[4750]: I0214 14:12:06.044409 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:12:06 crc kubenswrapper[4750]: I0214 14:12:06.045852 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" event={"ID":"585cf590-6dcc-49d2-a01f-9d6fa1612328","Type":"ContainerStarted","Data":"f8240f8e6267bab5a90951b8d69d0ce2f95cf2142cbd59382b22fe1be13058da"} Feb 14 14:12:06 crc kubenswrapper[4750]: I0214 14:12:06.046460 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:12:06 crc kubenswrapper[4750]: I0214 14:12:06.063548 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" 
podStartSLOduration=3.873651237 podStartE2EDuration="45.063525943s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:24.17585336 +0000 UTC m=+1156.201842861" lastFinishedPulling="2026-02-14 14:12:05.365728086 +0000 UTC m=+1197.391717567" observedRunningTime="2026-02-14 14:12:06.060842336 +0000 UTC m=+1198.086831817" watchObservedRunningTime="2026-02-14 14:12:06.063525943 +0000 UTC m=+1198.089515424" Feb 14 14:12:06 crc kubenswrapper[4750]: I0214 14:12:06.086449 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" podStartSLOduration=3.70842515 podStartE2EDuration="45.086433015s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.859504723 +0000 UTC m=+1155.885494204" lastFinishedPulling="2026-02-14 14:12:05.237512578 +0000 UTC m=+1197.263502069" observedRunningTime="2026-02-14 14:12:06.084619483 +0000 UTC m=+1198.110608964" watchObservedRunningTime="2026-02-14 14:12:06.086433015 +0000 UTC m=+1198.112422496" Feb 14 14:12:07 crc kubenswrapper[4750]: I0214 14:12:07.056037 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" event={"ID":"782775e0-d41f-4e9d-b6e5-4640a473b64a","Type":"ContainerStarted","Data":"db4d77d9f6993d1f3d2a5d7c371fc3bc2720002cd393f7b9750ee57edf269416"} Feb 14 14:12:07 crc kubenswrapper[4750]: I0214 14:12:07.081365 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" podStartSLOduration=3.755838711 podStartE2EDuration="46.081337364s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.823884073 +0000 UTC m=+1155.849873554" lastFinishedPulling="2026-02-14 14:12:06.149382726 +0000 UTC m=+1198.175372207" observedRunningTime="2026-02-14 14:12:07.077084673 +0000 UTC 
m=+1199.103074194" watchObservedRunningTime="2026-02-14 14:12:07.081337364 +0000 UTC m=+1199.107326865" Feb 14 14:12:08 crc kubenswrapper[4750]: I0214 14:12:08.064914 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" event={"ID":"32e5c795-f58a-41c2-8b75-53ef0f77bef8","Type":"ContainerStarted","Data":"c98bd1ed2a7339e88a9bdb820bd734bccf19ac200012b1209e52929ac626fb33"} Feb 14 14:12:08 crc kubenswrapper[4750]: I0214 14:12:08.065472 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:12:08 crc kubenswrapper[4750]: I0214 14:12:08.083993 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" podStartSLOduration=3.671745177 podStartE2EDuration="47.083971165s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.866416081 +0000 UTC m=+1155.892405572" lastFinishedPulling="2026-02-14 14:12:07.278642079 +0000 UTC m=+1199.304631560" observedRunningTime="2026-02-14 14:12:08.081306579 +0000 UTC m=+1200.107296070" watchObservedRunningTime="2026-02-14 14:12:08.083971165 +0000 UTC m=+1200.109960656" Feb 14 14:12:08 crc kubenswrapper[4750]: E0214 14:12:08.756992 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" podUID="bfeed48e-2ac8-4348-8c4c-0e239bd8c568" Feb 14 14:12:10 crc kubenswrapper[4750]: I0214 14:12:10.096430 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" event={"ID":"3e79ecca-328f-4049-945b-a506ba6d56f9","Type":"ContainerStarted","Data":"7503a81e00aacf18bb19886b98717815cff179002dc56d25f2a113da1f317528"} Feb 14 14:12:10 crc kubenswrapper[4750]: I0214 14:12:10.096989 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:12:10 crc kubenswrapper[4750]: I0214 14:12:10.121844 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" podStartSLOduration=3.733796397 podStartE2EDuration="49.121822174s" podCreationTimestamp="2026-02-14 14:11:21 +0000 UTC" firstStartedPulling="2026-02-14 14:11:23.834786845 +0000 UTC m=+1155.860776326" lastFinishedPulling="2026-02-14 14:12:09.222812622 +0000 UTC m=+1201.248802103" observedRunningTime="2026-02-14 14:12:10.112690684 +0000 UTC m=+1202.138680175" watchObservedRunningTime="2026-02-14 14:12:10.121822174 +0000 UTC m=+1202.147811665" Feb 14 14:12:11 crc kubenswrapper[4750]: I0214 14:12:11.685405 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-77cn7" Feb 14 14:12:11 crc kubenswrapper[4750]: I0214 14:12:11.715986 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nvfv2" Feb 14 14:12:11 crc kubenswrapper[4750]: I0214 14:12:11.803731 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7s6qb" Feb 14 14:12:11 crc kubenswrapper[4750]: I0214 14:12:11.804902 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sh6f9" Feb 14 14:12:11 crc 
kubenswrapper[4750]: I0214 14:12:11.833713 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wttvx" Feb 14 14:12:11 crc kubenswrapper[4750]: I0214 14:12:11.943226 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lgsxc" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.064925 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bmlpf" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.333136 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.337178 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-dq8x7" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.344244 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-87m54" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.380457 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92td7" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.447183 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dnj8w" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.505508 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-nzqxf" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.546022 4750 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5dvwn" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.602564 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7nsg" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.684798 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6fdcfd45d9-rqdd9" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.694916 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-zx56k" Feb 14 14:12:12 crc kubenswrapper[4750]: I0214 14:12:12.710194 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xt5f9" Feb 14 14:12:13 crc kubenswrapper[4750]: I0214 14:12:13.696324 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vjpnq" Feb 14 14:12:13 crc kubenswrapper[4750]: I0214 14:12:13.987326 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s" Feb 14 14:12:22 crc kubenswrapper[4750]: I0214 14:12:22.316480 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-gnmwc" Feb 14 14:12:25 crc kubenswrapper[4750]: I0214 14:12:25.238281 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" event={"ID":"bfeed48e-2ac8-4348-8c4c-0e239bd8c568","Type":"ContainerStarted","Data":"130db92d657f14ebb63d6f1e501a7041011555a7a630ff456972832bf6ccd9a9"} Feb 
14 14:12:25 crc kubenswrapper[4750]: I0214 14:12:25.269036 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xfnm8" podStartSLOduration=3.084294444 podStartE2EDuration="1m3.26899516s" podCreationTimestamp="2026-02-14 14:11:22 +0000 UTC" firstStartedPulling="2026-02-14 14:11:24.203564543 +0000 UTC m=+1156.229554024" lastFinishedPulling="2026-02-14 14:12:24.388265259 +0000 UTC m=+1216.414254740" observedRunningTime="2026-02-14 14:12:25.256840084 +0000 UTC m=+1217.282829565" watchObservedRunningTime="2026-02-14 14:12:25.26899516 +0000 UTC m=+1217.294984641" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.436579 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmlsp"] Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.438924 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.441454 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.441681 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-c6v95" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.441837 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.442016 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.468911 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmlsp"] Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.499164 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qzpnf"] Feb 14 14:12:40 
crc kubenswrapper[4750]: I0214 14:12:40.500959 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.502626 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.508402 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qzpnf"] Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.522782 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4d46fb2-6412-40f4-a194-07b8f4869fba-config\") pod \"dnsmasq-dns-675f4bcbfc-xmlsp\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.522843 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh79b\" (UniqueName: \"kubernetes.io/projected/b4d46fb2-6412-40f4-a194-07b8f4869fba-kube-api-access-fh79b\") pod \"dnsmasq-dns-675f4bcbfc-xmlsp\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.624566 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnkc\" (UniqueName: \"kubernetes.io/projected/a1faf7a2-1884-456d-b2c1-e21edffa4422-kube-api-access-5vnkc\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.624658 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-config\") pod 
\"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.624688 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.624723 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4d46fb2-6412-40f4-a194-07b8f4869fba-config\") pod \"dnsmasq-dns-675f4bcbfc-xmlsp\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.624754 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh79b\" (UniqueName: \"kubernetes.io/projected/b4d46fb2-6412-40f4-a194-07b8f4869fba-kube-api-access-fh79b\") pod \"dnsmasq-dns-675f4bcbfc-xmlsp\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.626019 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4d46fb2-6412-40f4-a194-07b8f4869fba-config\") pod \"dnsmasq-dns-675f4bcbfc-xmlsp\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.643948 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh79b\" (UniqueName: \"kubernetes.io/projected/b4d46fb2-6412-40f4-a194-07b8f4869fba-kube-api-access-fh79b\") pod \"dnsmasq-dns-675f4bcbfc-xmlsp\" (UID: 
\"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.725936 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnkc\" (UniqueName: \"kubernetes.io/projected/a1faf7a2-1884-456d-b2c1-e21edffa4422-kube-api-access-5vnkc\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.726004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-config\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.726028 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.726824 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.726897 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-config\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc 
kubenswrapper[4750]: I0214 14:12:40.742854 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnkc\" (UniqueName: \"kubernetes.io/projected/a1faf7a2-1884-456d-b2c1-e21edffa4422-kube-api-access-5vnkc\") pod \"dnsmasq-dns-78dd6ddcc-qzpnf\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.760030 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:12:40 crc kubenswrapper[4750]: I0214 14:12:40.820648 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:12:41 crc kubenswrapper[4750]: I0214 14:12:41.392935 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmlsp"] Feb 14 14:12:41 crc kubenswrapper[4750]: I0214 14:12:41.540178 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qzpnf"] Feb 14 14:12:41 crc kubenswrapper[4750]: W0214 14:12:41.543681 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1faf7a2_1884_456d_b2c1_e21edffa4422.slice/crio-d335f1ac3770776002bded594fc5dc318145b0c72ee8ef057ec3c913c16188be WatchSource:0}: Error finding container d335f1ac3770776002bded594fc5dc318145b0c72ee8ef057ec3c913c16188be: Status 404 returned error can't find the container with id d335f1ac3770776002bded594fc5dc318145b0c72ee8ef057ec3c913c16188be Feb 14 14:12:42 crc kubenswrapper[4750]: I0214 14:12:42.382276 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" event={"ID":"b4d46fb2-6412-40f4-a194-07b8f4869fba","Type":"ContainerStarted","Data":"ba73b4cd3b5911f58c482350d0d1142e8f19f82cb06953b64a5cb00d8b348705"} Feb 14 14:12:42 crc kubenswrapper[4750]: I0214 14:12:42.383472 4750 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" event={"ID":"a1faf7a2-1884-456d-b2c1-e21edffa4422","Type":"ContainerStarted","Data":"d335f1ac3770776002bded594fc5dc318145b0c72ee8ef057ec3c913c16188be"} Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.292744 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmlsp"] Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.327308 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zbknn"] Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.328834 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.342575 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zbknn"] Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.402423 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.402506 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-config\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.402562 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897fr\" (UniqueName: \"kubernetes.io/projected/b18a11b7-f81f-45f3-96c1-79780f287ad2-kube-api-access-897fr\") pod 
\"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.504070 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.504165 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-config\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.504216 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-897fr\" (UniqueName: \"kubernetes.io/projected/b18a11b7-f81f-45f3-96c1-79780f287ad2-kube-api-access-897fr\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.505238 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.505251 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-config\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " 
pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.526210 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-897fr\" (UniqueName: \"kubernetes.io/projected/b18a11b7-f81f-45f3-96c1-79780f287ad2-kube-api-access-897fr\") pod \"dnsmasq-dns-666b6646f7-zbknn\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.633744 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qzpnf"] Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.653731 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5gtmt"] Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.655601 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.660545 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.685335 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5gtmt"] Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.809211 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.809598 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dp5b\" (UniqueName: \"kubernetes.io/projected/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-kube-api-access-4dp5b\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.809617 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-config\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.911008 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dp5b\" (UniqueName: \"kubernetes.io/projected/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-kube-api-access-4dp5b\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.911052 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-config\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.911156 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.911908 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.913040 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-config\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.964944 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dp5b\" (UniqueName: \"kubernetes.io/projected/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-kube-api-access-4dp5b\") pod \"dnsmasq-dns-57d769cc4f-5gtmt\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:43 crc kubenswrapper[4750]: I0214 14:12:43.999477 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.378876 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zbknn"] Feb 14 14:12:44 crc kubenswrapper[4750]: W0214 14:12:44.404376 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18a11b7_f81f_45f3_96c1_79780f287ad2.slice/crio-292dc4f7fe4fbecd5f098b675aab5468af882a93e55f9b95601b3ad309b8616a WatchSource:0}: Error finding container 292dc4f7fe4fbecd5f098b675aab5468af882a93e55f9b95601b3ad309b8616a: Status 404 returned error can't find the container with id 292dc4f7fe4fbecd5f098b675aab5468af882a93e55f9b95601b3ad309b8616a Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.492623 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.494314 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.497047 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.497332 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.497388 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.497332 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fxj2h" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.498072 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.499526 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.502059 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.513106 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.533499 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.535084 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.562842 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.586564 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.591867 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.600828 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:12:44 crc kubenswrapper[4750]: W0214 14:12:44.604241 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44aab2e_5a28_47fc_8598_d3b6f0d0729c.slice/crio-0dcdc3c95841ef8367c973579ef130059903b1eb84b4273204ebd5c9177c2c0b WatchSource:0}: Error finding container 0dcdc3c95841ef8367c973579ef130059903b1eb84b4273204ebd5c9177c2c0b: Status 404 returned error can't find the container with id 0dcdc3c95841ef8367c973579ef130059903b1eb84b4273204ebd5c9177c2c0b Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628203 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628247 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: 
\"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628272 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628287 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628306 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8775619e-e8b9-4bee-987c-2cba92f6fcf3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628327 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628344 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-config-data\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" 
Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628365 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb0d3b08-53c1-4396-9413-bf581fad715a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628396 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52v6s\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-kube-api-access-52v6s\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628420 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628440 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628465 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 
14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628482 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628511 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb0d3b08-53c1-4396-9413-bf581fad715a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628528 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628549 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-config-data\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628565 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628597 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628614 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628635 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628654 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn9lh\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-kube-api-access-sn9lh\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.628678 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8775619e-e8b9-4bee-987c-2cba92f6fcf3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.640509 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-5gtmt"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730272 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52v6s\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-kube-api-access-52v6s\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730324 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730351 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730375 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730428 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-config-data\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 
14:12:44.730453 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730469 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730496 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730513 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb0d3b08-53c1-4396-9413-bf581fad715a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730530 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730550 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f513b3ee-aa21-48f3-b5fa-395f0557292a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730569 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-config-data\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730587 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730611 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730637 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730658 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730674 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730700 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730719 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn9lh\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-kube-api-access-sn9lh\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730735 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730763 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8775619e-e8b9-4bee-987c-2cba92f6fcf3-pod-info\") pod \"rabbitmq-server-0\" 
(UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730786 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5cpq\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-kube-api-access-l5cpq\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730810 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730833 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730850 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730873 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730889 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730905 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730921 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8775619e-e8b9-4bee-987c-2cba92f6fcf3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730940 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730955 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-config-data\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730972 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f513b3ee-aa21-48f3-b5fa-395f0557292a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.730991 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb0d3b08-53c1-4396-9413-bf581fad715a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.752086 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-config-data\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.752227 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.771068 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.784549 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.787237 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb0d3b08-53c1-4396-9413-bf581fad715a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.793040 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.811203 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.813765 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.814146 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb0d3b08-53c1-4396-9413-bf581fad715a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc 
kubenswrapper[4750]: I0214 14:12:44.820966 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.821035 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bb78f0012df4d86c020d4d63ca325a5a4ecea0fafe79b486510d70a930ce40b/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.822031 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.822093 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2ac3bba23e66408833a5e1a27b12337d51f2915a82f5db59eec5106e13b37e5/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.823171 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.825022 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-config-data\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.828635 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.831519 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn9lh\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-kube-api-access-sn9lh\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832506 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832554 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-config-data\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832637 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: 
\"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832705 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f513b3ee-aa21-48f3-b5fa-395f0557292a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832747 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832781 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832804 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.832866 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5cpq\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-kube-api-access-l5cpq\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 
14:12:44.832967 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.833004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.833045 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f513b3ee-aa21-48f3-b5fa-395f0557292a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.833460 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.838841 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.839427 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.840368 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.840615 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.840817 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.840936 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52v6s\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-kube-api-access-52v6s\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.844135 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f513b3ee-aa21-48f3-b5fa-395f0557292a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " 
pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.844337 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.844966 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8775619e-e8b9-4bee-987c-2cba92f6fcf3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.844981 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8775619e-e8b9-4bee-987c-2cba92f6fcf3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.845264 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-config-data\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.851314 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.851347 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d03b6670b43bc96c046bc6da8196f0e4b43abc0290cf3871fba8c6066ca9668/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.858382 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.863419 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.863683 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f513b3ee-aa21-48f3-b5fa-395f0557292a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.868987 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" 
Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.872702 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5cpq\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-kube-api-access-l5cpq\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.887417 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.890646 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.890847 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.895626 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.895884 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.896156 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.896194 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pmvlc" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.896286 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.896352 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.896164 4750 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.915979 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " pod="openstack/rabbitmq-server-1" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.934936 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935042 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935090 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935199 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ccd0d36-6fed-4aeb-b811-28cf48001750-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935235 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ccd0d36-6fed-4aeb-b811-28cf48001750-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935263 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935296 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935323 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935368 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9jv\" (UniqueName: 
\"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-kube-api-access-wj9jv\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935425 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.935451 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.936371 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.938125 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " pod="openstack/rabbitmq-server-0" Feb 14 14:12:44 crc kubenswrapper[4750]: I0214 14:12:44.959108 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037180 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037253 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037315 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9jv\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-kube-api-access-wj9jv\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037349 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037390 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037425 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037480 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037508 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037561 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ccd0d36-6fed-4aeb-b811-28cf48001750-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037580 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ccd0d36-6fed-4aeb-b811-28cf48001750-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037599 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.037883 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.038467 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.039258 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.039260 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.039381 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.039613 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.039632 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4154686648465faa147d42c607d238319e8043b2782d62a53f550aeb50ce6488/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.042131 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.043064 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ccd0d36-6fed-4aeb-b811-28cf48001750-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.044197 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ccd0d36-6fed-4aeb-b811-28cf48001750-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.044521 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.056166 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9jv\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-kube-api-access-wj9jv\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.094212 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.170709 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.214242 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.230380 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.472790 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:12:45 crc kubenswrapper[4750]: W0214 14:12:45.485781 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf513b3ee_aa21_48f3_b5fa_395f0557292a.slice/crio-a8a073ff2e97b012e16dcd00d36aefb30e8feced59211c0ea5a024d830662efb WatchSource:0}: Error finding container a8a073ff2e97b012e16dcd00d36aefb30e8feced59211c0ea5a024d830662efb: Status 404 returned error can't find the container with id a8a073ff2e97b012e16dcd00d36aefb30e8feced59211c0ea5a024d830662efb Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.513356 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" event={"ID":"b18a11b7-f81f-45f3-96c1-79780f287ad2","Type":"ContainerStarted","Data":"292dc4f7fe4fbecd5f098b675aab5468af882a93e55f9b95601b3ad309b8616a"} Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.514970 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" event={"ID":"d44aab2e-5a28-47fc-8598-d3b6f0d0729c","Type":"ContainerStarted","Data":"0dcdc3c95841ef8367c973579ef130059903b1eb84b4273204ebd5c9177c2c0b"} Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.665264 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.866567 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.911203 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.913590 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.915663 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7cls7" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.915825 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.916132 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.916444 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.921896 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.924377 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964013 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964071 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964098 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e9776fb-2263-407b-93a2-3f27f9e0635f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964137 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9776fb-2263-407b-93a2-3f27f9e0635f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964178 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07669f0c-9470-44d6-a192-d81016ea6554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07669f0c-9470-44d6-a192-d81016ea6554\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964211 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964233 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkj6\" (UniqueName: \"kubernetes.io/projected/8e9776fb-2263-407b-93a2-3f27f9e0635f-kube-api-access-czkj6\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:45 crc kubenswrapper[4750]: I0214 14:12:45.964254 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9776fb-2263-407b-93a2-3f27f9e0635f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.038095 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071084 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkj6\" (UniqueName: \"kubernetes.io/projected/8e9776fb-2263-407b-93a2-3f27f9e0635f-kube-api-access-czkj6\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071146 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9776fb-2263-407b-93a2-3f27f9e0635f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071274 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071331 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: 
I0214 14:12:46.071350 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e9776fb-2263-407b-93a2-3f27f9e0635f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071368 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9776fb-2263-407b-93a2-3f27f9e0635f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071395 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07669f0c-9470-44d6-a192-d81016ea6554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07669f0c-9470-44d6-a192-d81016ea6554\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071422 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.071720 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e9776fb-2263-407b-93a2-3f27f9e0635f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.072392 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.072794 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.072960 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e9776fb-2263-407b-93a2-3f27f9e0635f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.078828 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.078866 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07669f0c-9470-44d6-a192-d81016ea6554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07669f0c-9470-44d6-a192-d81016ea6554\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2634c1d3d6d476627c4f549939fa32ce7e8b1441f3414a2008e99a6627900f7c/globalmount\"" pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.081683 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9776fb-2263-407b-93a2-3f27f9e0635f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.082352 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9776fb-2263-407b-93a2-3f27f9e0635f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.111813 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkj6\" (UniqueName: \"kubernetes.io/projected/8e9776fb-2263-407b-93a2-3f27f9e0635f-kube-api-access-czkj6\") pod \"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.139578 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07669f0c-9470-44d6-a192-d81016ea6554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07669f0c-9470-44d6-a192-d81016ea6554\") pod 
\"openstack-galera-0\" (UID: \"8e9776fb-2263-407b-93a2-3f27f9e0635f\") " pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.259595 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.550518 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8775619e-e8b9-4bee-987c-2cba92f6fcf3","Type":"ContainerStarted","Data":"5117d92e1d047bcc4d22aa2f30ad4449f0529bfd2a25ceba69c64a61846f6494"} Feb 14 14:12:46 crc kubenswrapper[4750]: I0214 14:12:46.624839 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f513b3ee-aa21-48f3-b5fa-395f0557292a","Type":"ContainerStarted","Data":"a8a073ff2e97b012e16dcd00d36aefb30e8feced59211c0ea5a024d830662efb"} Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.050276 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.051838 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.054068 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.054417 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.054656 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.055442 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pvpvf" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.114502 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201220 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42db0a00-1aa6-4754-840c-f93a2b927858-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201283 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpc8\" (UniqueName: \"kubernetes.io/projected/42db0a00-1aa6-4754-840c-f93a2b927858-kube-api-access-bhpc8\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201358 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/42db0a00-1aa6-4754-840c-f93a2b927858-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201416 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201473 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201533 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201617 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/42db0a00-1aa6-4754-840c-f93a2b927858-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.201650 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303557 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303680 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/42db0a00-1aa6-4754-840c-f93a2b927858-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303764 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303798 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42db0a00-1aa6-4754-840c-f93a2b927858-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303830 4750 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bhpc8\" (UniqueName: \"kubernetes.io/projected/42db0a00-1aa6-4754-840c-f93a2b927858-kube-api-access-bhpc8\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303859 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/42db0a00-1aa6-4754-840c-f93a2b927858-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303929 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.303983 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.305727 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.306255 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.307689 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/42db0a00-1aa6-4754-840c-f93a2b927858-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.309068 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/42db0a00-1aa6-4754-840c-f93a2b927858-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.312089 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42db0a00-1aa6-4754-840c-f93a2b927858-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.315951 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.316002 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5c391851c7947547dcf87331d2c69c4b66bc51e3620ea8a7e17155b36711a063/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.333343 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhpc8\" (UniqueName: \"kubernetes.io/projected/42db0a00-1aa6-4754-840c-f93a2b927858-kube-api-access-bhpc8\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.347070 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/42db0a00-1aa6-4754-840c-f93a2b927858-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.355572 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f5660ec-cb75-4faa-9348-345c40ad3b6e\") pod \"openstack-cell1-galera-0\" (UID: \"42db0a00-1aa6-4754-840c-f93a2b927858\") " pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.419032 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.452280 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.460380 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.465919 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.469698 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-95gt4" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.469900 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.470036 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.610222 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2136d6a8-25e9-4eff-946e-bbc49dab0b04-kolla-config\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.610295 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136d6a8-25e9-4eff-946e-bbc49dab0b04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.610654 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2136d6a8-25e9-4eff-946e-bbc49dab0b04-config-data\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.610728 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2136d6a8-25e9-4eff-946e-bbc49dab0b04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.613566 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngwf\" (UniqueName: \"kubernetes.io/projected/2136d6a8-25e9-4eff-946e-bbc49dab0b04-kube-api-access-fngwf\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.715948 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2136d6a8-25e9-4eff-946e-bbc49dab0b04-config-data\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.716019 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2136d6a8-25e9-4eff-946e-bbc49dab0b04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.716094 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fngwf\" (UniqueName: \"kubernetes.io/projected/2136d6a8-25e9-4eff-946e-bbc49dab0b04-kube-api-access-fngwf\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " 
pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.716300 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2136d6a8-25e9-4eff-946e-bbc49dab0b04-kolla-config\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.716884 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136d6a8-25e9-4eff-946e-bbc49dab0b04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.716916 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2136d6a8-25e9-4eff-946e-bbc49dab0b04-config-data\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.717459 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2136d6a8-25e9-4eff-946e-bbc49dab0b04-kolla-config\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.753729 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136d6a8-25e9-4eff-946e-bbc49dab0b04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.754093 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2136d6a8-25e9-4eff-946e-bbc49dab0b04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.757306 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngwf\" (UniqueName: \"kubernetes.io/projected/2136d6a8-25e9-4eff-946e-bbc49dab0b04-kube-api-access-fngwf\") pod \"memcached-0\" (UID: \"2136d6a8-25e9-4eff-946e-bbc49dab0b04\") " pod="openstack/memcached-0" Feb 14 14:12:47 crc kubenswrapper[4750]: I0214 14:12:47.788164 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 14 14:12:49 crc kubenswrapper[4750]: I0214 14:12:49.852601 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:12:49 crc kubenswrapper[4750]: I0214 14:12:49.854436 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 14:12:49 crc kubenswrapper[4750]: I0214 14:12:49.858319 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hkrj2" Feb 14 14:12:49 crc kubenswrapper[4750]: I0214 14:12:49.883149 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:12:49 crc kubenswrapper[4750]: I0214 14:12:49.983470 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvdx\" (UniqueName: \"kubernetes.io/projected/046d0778-1fd6-4cfa-b632-4379f20af2b7-kube-api-access-7xvdx\") pod \"kube-state-metrics-0\" (UID: \"046d0778-1fd6-4cfa-b632-4379f20af2b7\") " pod="openstack/kube-state-metrics-0" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.088550 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xvdx\" (UniqueName: 
\"kubernetes.io/projected/046d0778-1fd6-4cfa-b632-4379f20af2b7-kube-api-access-7xvdx\") pod \"kube-state-metrics-0\" (UID: \"046d0778-1fd6-4cfa-b632-4379f20af2b7\") " pod="openstack/kube-state-metrics-0" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.114413 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xvdx\" (UniqueName: \"kubernetes.io/projected/046d0778-1fd6-4cfa-b632-4379f20af2b7-kube-api-access-7xvdx\") pod \"kube-state-metrics-0\" (UID: \"046d0778-1fd6-4cfa-b632-4379f20af2b7\") " pod="openstack/kube-state-metrics-0" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.190470 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.539709 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt"] Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.540933 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.548356 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.548598 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-nsqq6" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.556741 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt"] Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.699568 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.699660 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcl6\" (UniqueName: \"kubernetes.io/projected/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-kube-api-access-xhcl6\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.801334 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" 
Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.801428 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcl6\" (UniqueName: \"kubernetes.io/projected/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-kube-api-access-xhcl6\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:50 crc kubenswrapper[4750]: E0214 14:12:50.801577 4750 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 14 14:12:50 crc kubenswrapper[4750]: E0214 14:12:50.801665 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-serving-cert podName:95744c6b-6feb-4934-b1b6-6d73a3c17ad0 nodeName:}" failed. No retries permitted until 2026-02-14 14:12:51.301645905 +0000 UTC m=+1243.327635386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-serving-cert") pod "observability-ui-dashboards-66cbf594b5-pdpkt" (UID: "95744c6b-6feb-4934-b1b6-6d73a3c17ad0") : secret "observability-ui-dashboards" not found Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.836549 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcl6\" (UniqueName: \"kubernetes.io/projected/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-kube-api-access-xhcl6\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.945867 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cc4f8f495-k9w4c"] Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.949681 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:50 crc kubenswrapper[4750]: I0214 14:12:50.991857 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc4f8f495-k9w4c"] Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.032461 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.035625 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.040167 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.049376 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.049600 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.049794 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-f7td5" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.049905 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.050000 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.050767 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.050821 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.056446 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107602 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db9ddf8-5e65-4bc6-977c-25722237461f-console-serving-cert\") 
pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107648 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-trusted-ca-bundle\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107707 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-service-ca\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107755 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7db9ddf8-5e65-4bc6-977c-25722237461f-console-oauth-config\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107779 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvjv\" (UniqueName: \"kubernetes.io/projected/7db9ddf8-5e65-4bc6-977c-25722237461f-kube-api-access-ctvjv\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107800 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-oauth-serving-cert\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.107817 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-console-config\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209229 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209312 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-service-ca\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209345 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209376 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7gzbx\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-kube-api-access-7gzbx\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209409 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209441 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209471 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209494 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7db9ddf8-5e65-4bc6-977c-25722237461f-console-oauth-config\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc 
kubenswrapper[4750]: I0214 14:12:51.209524 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctvjv\" (UniqueName: \"kubernetes.io/projected/7db9ddf8-5e65-4bc6-977c-25722237461f-kube-api-access-ctvjv\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209557 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-oauth-serving-cert\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209583 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209606 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-console-config\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209632 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/012f3d60-0015-4f72-b414-8eb4f633f0f3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " 
pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209663 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209753 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db9ddf8-5e65-4bc6-977c-25722237461f-console-serving-cert\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209789 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-trusted-ca-bundle\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.209820 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-config\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.211020 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-oauth-serving-cert\") pod \"console-7cc4f8f495-k9w4c\" (UID: 
\"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.211484 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-console-config\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.213002 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-trusted-ca-bundle\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.213476 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7db9ddf8-5e65-4bc6-977c-25722237461f-service-ca\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.215949 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db9ddf8-5e65-4bc6-977c-25722237461f-console-serving-cert\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.229393 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctvjv\" (UniqueName: \"kubernetes.io/projected/7db9ddf8-5e65-4bc6-977c-25722237461f-kube-api-access-ctvjv\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " 
pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.233538 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7db9ddf8-5e65-4bc6-977c-25722237461f-console-oauth-config\") pod \"console-7cc4f8f495-k9w4c\" (UID: \"7db9ddf8-5e65-4bc6-977c-25722237461f\") " pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.282031 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.311852 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.311917 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312092 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312285 4750 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312328 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/012f3d60-0015-4f72-b414-8eb4f633f0f3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312374 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312462 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312596 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-config\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312662 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312792 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.312826 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzbx\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-kube-api-access-7gzbx\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.314415 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.314753 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc 
kubenswrapper[4750]: I0214 14:12:51.314787 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.316365 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-config\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.316619 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95744c6b-6feb-4934-b1b6-6d73a3c17ad0-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-pdpkt\" (UID: \"95744c6b-6feb-4934-b1b6-6d73a3c17ad0\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.318003 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.318663 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12ac80474dd276c57c1627a564c6c6310977fcdefa749db84ccb04d187d2f877/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.318777 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.319722 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.319758 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.320978 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/012f3d60-0015-4f72-b414-8eb4f633f0f3-config-out\") 
pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.331791 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzbx\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-kube-api-access-7gzbx\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.360576 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.372685 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 14:12:51 crc kubenswrapper[4750]: I0214 14:12:51.466500 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.401903 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4dn6h"] Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.403276 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.406353 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5jvd5" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.406527 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.406596 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.409535 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h"] Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.483450 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bpd75"] Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.485617 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.515279 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bpd75"] Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534162 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/761260d8-59af-48eb-bb26-aa7523be2d9d-scripts\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534226 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wnh\" (UniqueName: \"kubernetes.io/projected/761260d8-59af-48eb-bb26-aa7523be2d9d-kube-api-access-c7wnh\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534480 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761260d8-59af-48eb-bb26-aa7523be2d9d-combined-ca-bundle\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534550 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-log-ovn\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534572 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-run\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534588 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/761260d8-59af-48eb-bb26-aa7523be2d9d-ovn-controller-tls-certs\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.534653 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-run-ovn\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636332 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7wnh\" (UniqueName: \"kubernetes.io/projected/761260d8-59af-48eb-bb26-aa7523be2d9d-kube-api-access-c7wnh\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636407 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-scripts\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636467 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/761260d8-59af-48eb-bb26-aa7523be2d9d-combined-ca-bundle\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636490 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-run\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636515 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-log\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636530 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-log-ovn\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636552 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-run\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636568 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/761260d8-59af-48eb-bb26-aa7523be2d9d-ovn-controller-tls-certs\") pod \"ovn-controller-4dn6h\" 
(UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636592 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-etc-ovs\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636617 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-run-ovn\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636649 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf686\" (UniqueName: \"kubernetes.io/projected/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-kube-api-access-vf686\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636691 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-lib\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.636707 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/761260d8-59af-48eb-bb26-aa7523be2d9d-scripts\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" 
Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.637156 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-log-ovn\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.637219 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-run\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.637445 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/761260d8-59af-48eb-bb26-aa7523be2d9d-var-run-ovn\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.638629 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/761260d8-59af-48eb-bb26-aa7523be2d9d-scripts\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.645724 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761260d8-59af-48eb-bb26-aa7523be2d9d-combined-ca-bundle\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.646105 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/761260d8-59af-48eb-bb26-aa7523be2d9d-ovn-controller-tls-certs\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.656594 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wnh\" (UniqueName: \"kubernetes.io/projected/761260d8-59af-48eb-bb26-aa7523be2d9d-kube-api-access-c7wnh\") pod \"ovn-controller-4dn6h\" (UID: \"761260d8-59af-48eb-bb26-aa7523be2d9d\") " pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.738835 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf686\" (UniqueName: \"kubernetes.io/projected/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-kube-api-access-vf686\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.739330 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-lib\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.739600 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-lib\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.739732 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-scripts\") pod \"ovn-controller-ovs-bpd75\" (UID: 
\"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.739845 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-run\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.739898 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-log\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.739947 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-etc-ovs\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.740163 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-etc-ovs\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.740222 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-run\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.740736 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-var-log\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.741896 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.750678 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-scripts\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.770053 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf686\" (UniqueName: \"kubernetes.io/projected/bc6ad499-faf9-47ce-8df2-57c77fb7e2b5-kube-api-access-vf686\") pod \"ovn-controller-ovs-bpd75\" (UID: \"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5\") " pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.822255 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.961027 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.963084 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.968653 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.968888 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.968956 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.969190 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.969200 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jsq7d" Feb 14 14:12:52 crc kubenswrapper[4750]: I0214 14:12:52.975528 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045191 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045275 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045312 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72t4\" (UniqueName: \"kubernetes.io/projected/cd41b510-5787-4c7e-9e0b-22301cd49f54-kube-api-access-k72t4\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045374 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd41b510-5787-4c7e-9e0b-22301cd49f54-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045414 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd41b510-5787-4c7e-9e0b-22301cd49f54-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045438 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd41b510-5787-4c7e-9e0b-22301cd49f54-config\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045493 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.045608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147621 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147704 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147746 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k72t4\" (UniqueName: \"kubernetes.io/projected/cd41b510-5787-4c7e-9e0b-22301cd49f54-kube-api-access-k72t4\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147812 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd41b510-5787-4c7e-9e0b-22301cd49f54-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147860 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cd41b510-5787-4c7e-9e0b-22301cd49f54-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147882 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd41b510-5787-4c7e-9e0b-22301cd49f54-config\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.147944 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.148034 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.148750 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd41b510-5787-4c7e-9e0b-22301cd49f54-config\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.149066 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd41b510-5787-4c7e-9e0b-22301cd49f54-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc 
kubenswrapper[4750]: I0214 14:12:53.149064 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd41b510-5787-4c7e-9e0b-22301cd49f54-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.152197 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.152227 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e48e975ece6ef6164059f1c32eca2d7a967850344933a48e36a30bcf129ff22/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.152368 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.152493 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.153624 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cd41b510-5787-4c7e-9e0b-22301cd49f54-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.168849 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72t4\" (UniqueName: \"kubernetes.io/projected/cd41b510-5787-4c7e-9e0b-22301cd49f54-kube-api-access-k72t4\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.188670 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac64437-f09b-4db9-b12a-b139fb4f6e38\") pod \"ovsdbserver-nb-0\" (UID: \"cd41b510-5787-4c7e-9e0b-22301cd49f54\") " pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:53 crc kubenswrapper[4750]: I0214 14:12:53.329630 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 14:12:54 crc kubenswrapper[4750]: W0214 14:12:54.293275 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ccd0d36_6fed_4aeb_b811_28cf48001750.slice/crio-3ec6a5c479ebf34e3c4211c7378564992d5f77025702202ee642f11321d96535 WatchSource:0}: Error finding container 3ec6a5c479ebf34e3c4211c7378564992d5f77025702202ee642f11321d96535: Status 404 returned error can't find the container with id 3ec6a5c479ebf34e3c4211c7378564992d5f77025702202ee642f11321d96535 Feb 14 14:12:55 crc kubenswrapper[4750]: I0214 14:12:55.026500 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9ccd0d36-6fed-4aeb-b811-28cf48001750","Type":"ContainerStarted","Data":"3ec6a5c479ebf34e3c4211c7378564992d5f77025702202ee642f11321d96535"} Feb 14 14:12:55 crc kubenswrapper[4750]: I0214 14:12:55.027617 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cb0d3b08-53c1-4396-9413-bf581fad715a","Type":"ContainerStarted","Data":"7824950b151ce476d25e8060abb6eb9f43220493343d9131f04a73b06a671c81"} Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.211023 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.212865 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.215097 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7lxml" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.215491 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.216302 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.218195 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.233078 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246151 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcpgr\" (UniqueName: \"kubernetes.io/projected/27afbd74-b285-4efa-bd3f-33cc3c46363d-kube-api-access-hcpgr\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246220 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246244 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246292 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27afbd74-b285-4efa-bd3f-33cc3c46363d-config\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246318 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246463 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246581 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27afbd74-b285-4efa-bd3f-33cc3c46363d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.246893 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/27afbd74-b285-4efa-bd3f-33cc3c46363d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.348514 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/27afbd74-b285-4efa-bd3f-33cc3c46363d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.348873 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcpgr\" (UniqueName: \"kubernetes.io/projected/27afbd74-b285-4efa-bd3f-33cc3c46363d-kube-api-access-hcpgr\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.349025 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.349267 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.349414 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27afbd74-b285-4efa-bd3f-33cc3c46363d-config\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.349540 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.349689 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.349887 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27afbd74-b285-4efa-bd3f-33cc3c46363d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.356339 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.356586 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/351a8f5907cf66dc58ae7ec2b1eec16d9c4969be0483e04479e8127b24a08539/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.360177 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/27afbd74-b285-4efa-bd3f-33cc3c46363d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.360447 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27afbd74-b285-4efa-bd3f-33cc3c46363d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.360461 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27afbd74-b285-4efa-bd3f-33cc3c46363d-config\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.364901 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 
14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.365577 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.367284 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcpgr\" (UniqueName: \"kubernetes.io/projected/27afbd74-b285-4efa-bd3f-33cc3c46363d-kube-api-access-hcpgr\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.369370 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27afbd74-b285-4efa-bd3f-33cc3c46363d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.403279 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-187ae91c-fff9-44eb-a10b-ec25d94577ea\") pod \"ovsdbserver-sb-0\" (UID: \"27afbd74-b285-4efa-bd3f-33cc3c46363d\") " pod="openstack/ovsdbserver-sb-0" Feb 14 14:12:57 crc kubenswrapper[4750]: I0214 14:12:57.532349 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 14:13:03 crc kubenswrapper[4750]: E0214 14:13:03.289762 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 14:13:03 crc kubenswrapper[4750]: E0214 14:13:03.290008 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vnkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCont
ext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-qzpnf_openstack(a1faf7a2-1884-456d-b2c1-e21edffa4422): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:13:03 crc kubenswrapper[4750]: E0214 14:13:03.291496 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" podUID="a1faf7a2-1884-456d-b2c1-e21edffa4422" Feb 14 14:13:03 crc kubenswrapper[4750]: E0214 14:13:03.321579 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 14:13:03 crc kubenswrapper[4750]: E0214 14:13:03.321750 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fh79b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xmlsp_openstack(b4d46fb2-6412-40f4-a194-07b8f4869fba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:13:03 crc kubenswrapper[4750]: E0214 14:13:03.323490 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" podUID="b4d46fb2-6412-40f4-a194-07b8f4869fba" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.734462 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.832417 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vnkc\" (UniqueName: \"kubernetes.io/projected/a1faf7a2-1884-456d-b2c1-e21edffa4422-kube-api-access-5vnkc\") pod \"a1faf7a2-1884-456d-b2c1-e21edffa4422\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.832536 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-dns-svc\") pod \"a1faf7a2-1884-456d-b2c1-e21edffa4422\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.832688 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-config\") pod \"a1faf7a2-1884-456d-b2c1-e21edffa4422\" (UID: \"a1faf7a2-1884-456d-b2c1-e21edffa4422\") " Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.833243 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-config" (OuterVolumeSpecName: "config") pod "a1faf7a2-1884-456d-b2c1-e21edffa4422" (UID: "a1faf7a2-1884-456d-b2c1-e21edffa4422"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.833256 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1faf7a2-1884-456d-b2c1-e21edffa4422" (UID: "a1faf7a2-1884-456d-b2c1-e21edffa4422"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.836911 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1faf7a2-1884-456d-b2c1-e21edffa4422-kube-api-access-5vnkc" (OuterVolumeSpecName: "kube-api-access-5vnkc") pod "a1faf7a2-1884-456d-b2c1-e21edffa4422" (UID: "a1faf7a2-1884-456d-b2c1-e21edffa4422"). InnerVolumeSpecName "kube-api-access-5vnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.928716 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.935443 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.935465 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vnkc\" (UniqueName: \"kubernetes.io/projected/a1faf7a2-1884-456d-b2c1-e21edffa4422-kube-api-access-5vnkc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:04 crc kubenswrapper[4750]: I0214 14:13:04.935474 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1faf7a2-1884-456d-b2c1-e21edffa4422-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.037030 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4d46fb2-6412-40f4-a194-07b8f4869fba-config\") pod \"b4d46fb2-6412-40f4-a194-07b8f4869fba\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.037489 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d46fb2-6412-40f4-a194-07b8f4869fba-config" (OuterVolumeSpecName: "config") pod "b4d46fb2-6412-40f4-a194-07b8f4869fba" (UID: "b4d46fb2-6412-40f4-a194-07b8f4869fba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.037555 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh79b\" (UniqueName: \"kubernetes.io/projected/b4d46fb2-6412-40f4-a194-07b8f4869fba-kube-api-access-fh79b\") pod \"b4d46fb2-6412-40f4-a194-07b8f4869fba\" (UID: \"b4d46fb2-6412-40f4-a194-07b8f4869fba\") " Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.038197 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4d46fb2-6412-40f4-a194-07b8f4869fba-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.042660 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d46fb2-6412-40f4-a194-07b8f4869fba-kube-api-access-fh79b" (OuterVolumeSpecName: "kube-api-access-fh79b") pod "b4d46fb2-6412-40f4-a194-07b8f4869fba" (UID: "b4d46fb2-6412-40f4-a194-07b8f4869fba"). InnerVolumeSpecName "kube-api-access-fh79b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.124863 4750 generic.go:334] "Generic (PLEG): container finished" podID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerID="b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e" exitCode=0 Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.124938 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" event={"ID":"b18a11b7-f81f-45f3-96c1-79780f287ad2","Type":"ContainerDied","Data":"b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e"} Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.130636 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" event={"ID":"a1faf7a2-1884-456d-b2c1-e21edffa4422","Type":"ContainerDied","Data":"d335f1ac3770776002bded594fc5dc318145b0c72ee8ef057ec3c913c16188be"} Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.130755 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qzpnf" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.134601 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.134789 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xmlsp" event={"ID":"b4d46fb2-6412-40f4-a194-07b8f4869fba","Type":"ContainerDied","Data":"ba73b4cd3b5911f58c482350d0d1142e8f19f82cb06953b64a5cb00d8b348705"} Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.137129 4750 generic.go:334] "Generic (PLEG): container finished" podID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerID="156d4ffaf0d2885882f5e166ebfafb2f4d94ff482b506f5984a8d609919485fd" exitCode=0 Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.137172 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" event={"ID":"d44aab2e-5a28-47fc-8598-d3b6f0d0729c","Type":"ContainerDied","Data":"156d4ffaf0d2885882f5e166ebfafb2f4d94ff482b506f5984a8d609919485fd"} Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.140432 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh79b\" (UniqueName: \"kubernetes.io/projected/b4d46fb2-6412-40f4-a194-07b8f4869fba-kube-api-access-fh79b\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.262768 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmlsp"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.272219 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmlsp"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.302213 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qzpnf"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.313492 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qzpnf"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.418239 4750 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: W0214 14:13:05.491642 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd41b510_5787_4c7e_9e0b_22301cd49f54.slice/crio-f00745eb8770caa970f90b8d96adb46ca61f470739cc0d96276e7459b3e1603e WatchSource:0}: Error finding container f00745eb8770caa970f90b8d96adb46ca61f470739cc0d96276e7459b3e1603e: Status 404 returned error can't find the container with id f00745eb8770caa970f90b8d96adb46ca61f470739cc0d96276e7459b3e1603e Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.662571 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc4f8f495-k9w4c"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.675822 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h"] Feb 14 14:13:05 crc kubenswrapper[4750]: W0214 14:13:05.676679 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db9ddf8_5e65_4bc6_977c_25722237461f.slice/crio-1e8c46da9dd4659c7f2f6028cb36c8d1d4899c71963742a742df28dc42d9f5df WatchSource:0}: Error finding container 1e8c46da9dd4659c7f2f6028cb36c8d1d4899c71963742a742df28dc42d9f5df: Status 404 returned error can't find the container with id 1e8c46da9dd4659c7f2f6028cb36c8d1d4899c71963742a742df28dc42d9f5df Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.844305 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.857033 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.866396 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 
14:13:05.876712 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.884349 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.892685 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: I0214 14:13:05.971560 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 14:13:05 crc kubenswrapper[4750]: W0214 14:13:05.992625 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod046d0778_1fd6_4cfa_b632_4379f20af2b7.slice/crio-d625b55be84eb41376b4f62e7c28c12cd7ecda03cdb5ebb795534da9ce676cec WatchSource:0}: Error finding container d625b55be84eb41376b4f62e7c28c12cd7ecda03cdb5ebb795534da9ce676cec: Status 404 returned error can't find the container with id d625b55be84eb41376b4f62e7c28c12cd7ecda03cdb5ebb795534da9ce676cec Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.059890 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bpd75"] Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.148761 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2136d6a8-25e9-4eff-946e-bbc49dab0b04","Type":"ContainerStarted","Data":"a6a3068e3e958939e73758fbcfc874514dbf494322b608ebff9d3e58501d4c8c"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.149742 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc4f8f495-k9w4c" event={"ID":"7db9ddf8-5e65-4bc6-977c-25722237461f","Type":"ContainerStarted","Data":"1e8c46da9dd4659c7f2f6028cb36c8d1d4899c71963742a742df28dc42d9f5df"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.150833 4750 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerStarted","Data":"18ca32401ff5ded06bdf4d3591f6258a4b267bef449c5d106d2d4aaff52c3188"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.151832 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"046d0778-1fd6-4cfa-b632-4379f20af2b7","Type":"ContainerStarted","Data":"d625b55be84eb41376b4f62e7c28c12cd7ecda03cdb5ebb795534da9ce676cec"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.153136 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h" event={"ID":"761260d8-59af-48eb-bb26-aa7523be2d9d","Type":"ContainerStarted","Data":"aeda7ac837d5b0ea34dba97b182673c0fe469417795f4f4cc9c5d418e7ae00bb"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.154262 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cd41b510-5787-4c7e-9e0b-22301cd49f54","Type":"ContainerStarted","Data":"f00745eb8770caa970f90b8d96adb46ca61f470739cc0d96276e7459b3e1603e"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.156559 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" event={"ID":"b18a11b7-f81f-45f3-96c1-79780f287ad2","Type":"ContainerStarted","Data":"fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.156633 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.158332 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" event={"ID":"d44aab2e-5a28-47fc-8598-d3b6f0d0729c","Type":"ContainerStarted","Data":"f537f18a6824f8433961050c19bccd2521790acfe515efe9907e5f46d40c8721"} Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.159376 
4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:13:06 crc kubenswrapper[4750]: W0214 14:13:06.172180 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42db0a00_1aa6_4754_840c_f93a2b927858.slice/crio-6132069fb710adbedacdb836c6535ebb1e5e1e4df93ead376230372e3b78e1e8 WatchSource:0}: Error finding container 6132069fb710adbedacdb836c6535ebb1e5e1e4df93ead376230372e3b78e1e8: Status 404 returned error can't find the container with id 6132069fb710adbedacdb836c6535ebb1e5e1e4df93ead376230372e3b78e1e8 Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.183643 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" podStartSLOduration=3.186869691 podStartE2EDuration="23.183621508s" podCreationTimestamp="2026-02-14 14:12:43 +0000 UTC" firstStartedPulling="2026-02-14 14:12:44.423310047 +0000 UTC m=+1236.449299518" lastFinishedPulling="2026-02-14 14:13:04.420061854 +0000 UTC m=+1256.446051335" observedRunningTime="2026-02-14 14:13:06.178311206 +0000 UTC m=+1258.204300687" watchObservedRunningTime="2026-02-14 14:13:06.183621508 +0000 UTC m=+1258.209610989" Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.202772 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" podStartSLOduration=3.265388314 podStartE2EDuration="23.202749572s" podCreationTimestamp="2026-02-14 14:12:43 +0000 UTC" firstStartedPulling="2026-02-14 14:12:44.606911761 +0000 UTC m=+1236.632901242" lastFinishedPulling="2026-02-14 14:13:04.544273019 +0000 UTC m=+1256.570262500" observedRunningTime="2026-02-14 14:13:06.197456421 +0000 UTC m=+1258.223445912" watchObservedRunningTime="2026-02-14 14:13:06.202749572 +0000 UTC m=+1258.228739053" Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.755370 4750 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1faf7a2-1884-456d-b2c1-e21edffa4422" path="/var/lib/kubelet/pods/a1faf7a2-1884-456d-b2c1-e21edffa4422/volumes" Feb 14 14:13:06 crc kubenswrapper[4750]: I0214 14:13:06.756422 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d46fb2-6412-40f4-a194-07b8f4869fba" path="/var/lib/kubelet/pods/b4d46fb2-6412-40f4-a194-07b8f4869fba/volumes" Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.168033 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e9776fb-2263-407b-93a2-3f27f9e0635f","Type":"ContainerStarted","Data":"c9a4ef6c54fc6b654111853e02cca13de25656299a8ad0653d1add1443d4d7d1"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.170190 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8775619e-e8b9-4bee-987c-2cba92f6fcf3","Type":"ContainerStarted","Data":"9199a836dc535c97ea00db338dd3c3eb706b1efaadba65e0b4e87bbe403356f1"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.173849 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"42db0a00-1aa6-4754-840c-f93a2b927858","Type":"ContainerStarted","Data":"6132069fb710adbedacdb836c6535ebb1e5e1e4df93ead376230372e3b78e1e8"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.175321 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" event={"ID":"95744c6b-6feb-4934-b1b6-6d73a3c17ad0","Type":"ContainerStarted","Data":"136710a76732fa349e974dff9eefb32df6128ae88cb4c6bb33859710d4576d11"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.177053 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"27afbd74-b285-4efa-bd3f-33cc3c46363d","Type":"ContainerStarted","Data":"a4ca2f8006869bdfef7b28acdbe679c473fe51bb8d7683a4d466c455e77a0e35"} Feb 
14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.180086 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9ccd0d36-6fed-4aeb-b811-28cf48001750","Type":"ContainerStarted","Data":"0fccd90bee9500e05f117ba916dd6afb5f3593b6429c3514da328a678356bd23"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.182607 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc4f8f495-k9w4c" event={"ID":"7db9ddf8-5e65-4bc6-977c-25722237461f","Type":"ContainerStarted","Data":"3eb79bbaf03635604791fca2d0143476e62b8d58015efe6fd22edd3e0e08ed17"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.186512 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f513b3ee-aa21-48f3-b5fa-395f0557292a","Type":"ContainerStarted","Data":"f621366506caeb02dee9c6bde64abfe76f9b07ca0844ab51f1565b84455dc23c"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.192003 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cb0d3b08-53c1-4396-9413-bf581fad715a","Type":"ContainerStarted","Data":"9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.199227 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bpd75" event={"ID":"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5","Type":"ContainerStarted","Data":"fe2a80f558d688fc49881f0181000cb7ee6bdf0d85d6d75556be7297bab356a3"} Feb 14 14:13:07 crc kubenswrapper[4750]: I0214 14:13:07.270799 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cc4f8f495-k9w4c" podStartSLOduration=17.270774853 podStartE2EDuration="17.270774853s" podCreationTimestamp="2026-02-14 14:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
14:13:07.265442712 +0000 UTC m=+1259.291432223" watchObservedRunningTime="2026-02-14 14:13:07.270774853 +0000 UTC m=+1259.296764324" Feb 14 14:13:11 crc kubenswrapper[4750]: I0214 14:13:11.282327 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:13:11 crc kubenswrapper[4750]: I0214 14:13:11.282965 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:13:11 crc kubenswrapper[4750]: I0214 14:13:11.290485 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:13:12 crc kubenswrapper[4750]: I0214 14:13:12.243716 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cc4f8f495-k9w4c" Feb 14 14:13:12 crc kubenswrapper[4750]: I0214 14:13:12.307931 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f459dfbdf-9lrz2"] Feb 14 14:13:13 crc kubenswrapper[4750]: I0214 14:13:13.662308 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.001284 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.064165 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zbknn"] Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.263233 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerName="dnsmasq-dns" containerID="cri-o://fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f" gracePeriod=10 Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.849907 4750 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.900326 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-dns-svc\") pod \"b18a11b7-f81f-45f3-96c1-79780f287ad2\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.900445 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-897fr\" (UniqueName: \"kubernetes.io/projected/b18a11b7-f81f-45f3-96c1-79780f287ad2-kube-api-access-897fr\") pod \"b18a11b7-f81f-45f3-96c1-79780f287ad2\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.900478 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-config\") pod \"b18a11b7-f81f-45f3-96c1-79780f287ad2\" (UID: \"b18a11b7-f81f-45f3-96c1-79780f287ad2\") " Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.913843 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18a11b7-f81f-45f3-96c1-79780f287ad2-kube-api-access-897fr" (OuterVolumeSpecName: "kube-api-access-897fr") pod "b18a11b7-f81f-45f3-96c1-79780f287ad2" (UID: "b18a11b7-f81f-45f3-96c1-79780f287ad2"). InnerVolumeSpecName "kube-api-access-897fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.957631 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b18a11b7-f81f-45f3-96c1-79780f287ad2" (UID: "b18a11b7-f81f-45f3-96c1-79780f287ad2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:14 crc kubenswrapper[4750]: I0214 14:13:14.965582 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-config" (OuterVolumeSpecName: "config") pod "b18a11b7-f81f-45f3-96c1-79780f287ad2" (UID: "b18a11b7-f81f-45f3-96c1-79780f287ad2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.003393 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.003435 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-897fr\" (UniqueName: \"kubernetes.io/projected/b18a11b7-f81f-45f3-96c1-79780f287ad2-kube-api-access-897fr\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.003449 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18a11b7-f81f-45f3-96c1-79780f287ad2-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.276554 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"046d0778-1fd6-4cfa-b632-4379f20af2b7","Type":"ContainerStarted","Data":"bc619036a0651f2705a33c21806adfd86d8a19832c0d600989d60e67ef051a30"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.276683 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.278850 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h" 
event={"ID":"761260d8-59af-48eb-bb26-aa7523be2d9d","Type":"ContainerStarted","Data":"210b3907d5c1309c5f2f6190229962f65797003704247526dcc5d55360da020e"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.279039 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4dn6h" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.289877 4750 generic.go:334] "Generic (PLEG): container finished" podID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerID="fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f" exitCode=0 Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.289940 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" event={"ID":"b18a11b7-f81f-45f3-96c1-79780f287ad2","Type":"ContainerDied","Data":"fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.289966 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" event={"ID":"b18a11b7-f81f-45f3-96c1-79780f287ad2","Type":"ContainerDied","Data":"292dc4f7fe4fbecd5f098b675aab5468af882a93e55f9b95601b3ad309b8616a"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.289984 4750 scope.go:117] "RemoveContainer" containerID="fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.290082 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zbknn" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.301146 4750 generic.go:334] "Generic (PLEG): container finished" podID="bc6ad499-faf9-47ce-8df2-57c77fb7e2b5" containerID="d30b1911a2a71a351c418ca8dbfa48e89c0a26de04f1245a20a54d391c28bc18" exitCode=0 Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.301228 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bpd75" event={"ID":"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5","Type":"ContainerDied","Data":"d30b1911a2a71a351c418ca8dbfa48e89c0a26de04f1245a20a54d391c28bc18"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.303794 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2136d6a8-25e9-4eff-946e-bbc49dab0b04","Type":"ContainerStarted","Data":"d7a8b6e701cbe4de9a29ef7f20f2a96d7a3cdb5fd9ed7381acddb5e202289ed9"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.303917 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.305394 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"42db0a00-1aa6-4754-840c-f93a2b927858","Type":"ContainerStarted","Data":"3cc700e28bd8aa14174809e3c1ed997acae748a1bfa0a0921904dc5db43a7637"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.317881 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.880355873 podStartE2EDuration="26.317856396s" podCreationTimestamp="2026-02-14 14:12:49 +0000 UTC" firstStartedPulling="2026-02-14 14:13:05.995506775 +0000 UTC m=+1258.021496246" lastFinishedPulling="2026-02-14 14:13:14.433007288 +0000 UTC m=+1266.458996769" observedRunningTime="2026-02-14 14:13:15.295743737 +0000 UTC m=+1267.321733248" watchObservedRunningTime="2026-02-14 14:13:15.317856396 +0000 
UTC m=+1267.343845877" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.327561 4750 scope.go:117] "RemoveContainer" containerID="b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.331958 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e9776fb-2263-407b-93a2-3f27f9e0635f","Type":"ContainerStarted","Data":"0f6e67d38f169d04417e851b118f987f3c84f2a98741ba7e9b1d5cb409c4e677"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.338603 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" event={"ID":"95744c6b-6feb-4934-b1b6-6d73a3c17ad0","Type":"ContainerStarted","Data":"4a7693b10f2217b2afb61104eabd1499ba2e0a230306f5716bbb3b8cd3fd0d83"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.341416 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cd41b510-5787-4c7e-9e0b-22301cd49f54","Type":"ContainerStarted","Data":"a505f42dede46665ff92a5abe48f5c4da72f05cad23385257f2690b93d18871f"} Feb 14 14:13:15 crc kubenswrapper[4750]: E0214 14:13:15.342727 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6ad499_faf9_47ce_8df2_57c77fb7e2b5.slice/crio-d30b1911a2a71a351c418ca8dbfa48e89c0a26de04f1245a20a54d391c28bc18.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6ad499_faf9_47ce_8df2_57c77fb7e2b5.slice/crio-conmon-d30b1911a2a71a351c418ca8dbfa48e89c0a26de04f1245a20a54d391c28bc18.scope\": RecentStats: unable to find data in memory cache]" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.343793 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"27afbd74-b285-4efa-bd3f-33cc3c46363d","Type":"ContainerStarted","Data":"28689aabe1a9aa87eed2e492f1483e96f74f150c3ee47f4a25f121c62646bca3"} Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.344966 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4dn6h" podStartSLOduration=14.883279485 podStartE2EDuration="23.344945967s" podCreationTimestamp="2026-02-14 14:12:52 +0000 UTC" firstStartedPulling="2026-02-14 14:13:05.768627568 +0000 UTC m=+1257.794617049" lastFinishedPulling="2026-02-14 14:13:14.23029405 +0000 UTC m=+1266.256283531" observedRunningTime="2026-02-14 14:13:15.330952239 +0000 UTC m=+1267.356941720" watchObservedRunningTime="2026-02-14 14:13:15.344945967 +0000 UTC m=+1267.370935448" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.386863 4750 scope.go:117] "RemoveContainer" containerID="fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f" Feb 14 14:13:15 crc kubenswrapper[4750]: E0214 14:13:15.387777 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f\": container with ID starting with fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f not found: ID does not exist" containerID="fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.387808 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f"} err="failed to get container status \"fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f\": rpc error: code = NotFound desc = could not find container \"fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f\": container with ID starting with fecf059322cc33b8f7aade146c5add58f705a93390a77c1a969506706d0bf83f not found: ID does 
not exist" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.387832 4750 scope.go:117] "RemoveContainer" containerID="b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e" Feb 14 14:13:15 crc kubenswrapper[4750]: E0214 14:13:15.388105 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e\": container with ID starting with b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e not found: ID does not exist" containerID="b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.388202 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e"} err="failed to get container status \"b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e\": rpc error: code = NotFound desc = could not find container \"b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e\": container with ID starting with b561c00edd4fe00077981188c93817519753a547e7b90ef36c4626dd6634d94e not found: ID does not exist" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.429773 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zbknn"] Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.438416 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zbknn"] Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.440107 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.478292724 podStartE2EDuration="28.440085825s" podCreationTimestamp="2026-02-14 14:12:47 +0000 UTC" firstStartedPulling="2026-02-14 14:13:05.886718769 +0000 UTC m=+1257.912708250" lastFinishedPulling="2026-02-14 14:13:12.84851187 +0000 
UTC m=+1264.874501351" observedRunningTime="2026-02-14 14:13:15.402698181 +0000 UTC m=+1267.428687662" watchObservedRunningTime="2026-02-14 14:13:15.440085825 +0000 UTC m=+1267.466075306" Feb 14 14:13:15 crc kubenswrapper[4750]: I0214 14:13:15.485629 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-pdpkt" podStartSLOduration=17.990306837 podStartE2EDuration="25.48560456s" podCreationTimestamp="2026-02-14 14:12:50 +0000 UTC" firstStartedPulling="2026-02-14 14:13:06.171205304 +0000 UTC m=+1258.197194795" lastFinishedPulling="2026-02-14 14:13:13.666503027 +0000 UTC m=+1265.692492518" observedRunningTime="2026-02-14 14:13:15.458960032 +0000 UTC m=+1267.484949513" watchObservedRunningTime="2026-02-14 14:13:15.48560456 +0000 UTC m=+1267.511594041" Feb 14 14:13:16 crc kubenswrapper[4750]: I0214 14:13:16.358431 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bpd75" event={"ID":"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5","Type":"ContainerStarted","Data":"d9995a3d66eca176d70f4293b046ec3fe3693381d08c5b5f6a06bf0f8fca0549"} Feb 14 14:13:16 crc kubenswrapper[4750]: I0214 14:13:16.754549 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" path="/var/lib/kubelet/pods/b18a11b7-f81f-45f3-96c1-79780f287ad2/volumes" Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.369452 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerStarted","Data":"856c7e3b80fdff9660116d588a1dfeda0e7deb6bcbc8d3a246a83853fb939ae9"} Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.371516 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"cd41b510-5787-4c7e-9e0b-22301cd49f54","Type":"ContainerStarted","Data":"1bab9aebb4c9d650e6e1807e964d49beda315674ccb21cfdf0b626f62ef91d49"} Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.373672 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"27afbd74-b285-4efa-bd3f-33cc3c46363d","Type":"ContainerStarted","Data":"3722ac0af7fc7c11517e4874ca9c06c5f33e904546f215a96bd849c849b797c1"} Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.375995 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bpd75" event={"ID":"bc6ad499-faf9-47ce-8df2-57c77fb7e2b5","Type":"ContainerStarted","Data":"bc886c125ab1bb314fdfc6ffacd29ce84cd42a42e0456d49a47fdabf8e342418"} Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.376154 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.441196 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bpd75" podStartSLOduration=18.221781045 podStartE2EDuration="25.441165887s" podCreationTimestamp="2026-02-14 14:12:52 +0000 UTC" firstStartedPulling="2026-02-14 14:13:06.184837072 +0000 UTC m=+1258.210826553" lastFinishedPulling="2026-02-14 14:13:13.404221914 +0000 UTC m=+1265.430211395" observedRunningTime="2026-02-14 14:13:17.428729353 +0000 UTC m=+1269.454718894" watchObservedRunningTime="2026-02-14 14:13:17.441165887 +0000 UTC m=+1269.467155408" Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.488289 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.738025394 podStartE2EDuration="26.488258517s" podCreationTimestamp="2026-02-14 14:12:51 +0000 UTC" firstStartedPulling="2026-02-14 14:13:05.493688175 +0000 UTC m=+1257.519677656" lastFinishedPulling="2026-02-14 14:13:16.243921288 +0000 UTC 
m=+1268.269910779" observedRunningTime="2026-02-14 14:13:17.484274364 +0000 UTC m=+1269.510263855" watchObservedRunningTime="2026-02-14 14:13:17.488258517 +0000 UTC m=+1269.514248028" Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.494266 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.428155262 podStartE2EDuration="21.494244507s" podCreationTimestamp="2026-02-14 14:12:56 +0000 UTC" firstStartedPulling="2026-02-14 14:13:06.183523655 +0000 UTC m=+1258.209513136" lastFinishedPulling="2026-02-14 14:13:16.24961289 +0000 UTC m=+1268.275602381" observedRunningTime="2026-02-14 14:13:17.44972979 +0000 UTC m=+1269.475719351" watchObservedRunningTime="2026-02-14 14:13:17.494244507 +0000 UTC m=+1269.520234018" Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.532624 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 14 14:13:17 crc kubenswrapper[4750]: I0214 14:13:17.822393 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:13:18 crc kubenswrapper[4750]: I0214 14:13:18.330226 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 14 14:13:18 crc kubenswrapper[4750]: I0214 14:13:18.533528 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 14 14:13:18 crc kubenswrapper[4750]: I0214 14:13:18.582755 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.398738 4750 generic.go:334] "Generic (PLEG): container finished" podID="42db0a00-1aa6-4754-840c-f93a2b927858" containerID="3cc700e28bd8aa14174809e3c1ed997acae748a1bfa0a0921904dc5db43a7637" exitCode=0 Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.398806 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"42db0a00-1aa6-4754-840c-f93a2b927858","Type":"ContainerDied","Data":"3cc700e28bd8aa14174809e3c1ed997acae748a1bfa0a0921904dc5db43a7637"} Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.402191 4750 generic.go:334] "Generic (PLEG): container finished" podID="8e9776fb-2263-407b-93a2-3f27f9e0635f" containerID="0f6e67d38f169d04417e851b118f987f3c84f2a98741ba7e9b1d5cb409c4e677" exitCode=0 Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.402231 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e9776fb-2263-407b-93a2-3f27f9e0635f","Type":"ContainerDied","Data":"0f6e67d38f169d04417e851b118f987f3c84f2a98741ba7e9b1d5cb409c4e677"} Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.476972 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.765983 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-66tc6"] Feb 14 14:13:19 crc kubenswrapper[4750]: E0214 14:13:19.766633 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerName="dnsmasq-dns" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.766654 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerName="dnsmasq-dns" Feb 14 14:13:19 crc kubenswrapper[4750]: E0214 14:13:19.766705 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerName="init" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.766715 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerName="init" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.766956 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b18a11b7-f81f-45f3-96c1-79780f287ad2" containerName="dnsmasq-dns" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.768318 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.771872 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.777007 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-66tc6"] Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.824898 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6h75r"] Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.826394 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.829212 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.835402 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6h75r"] Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.939743 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940064 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: 
\"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940200 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-config\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940326 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940369 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-ovn-rundir\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940399 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kv87\" (UniqueName: \"kubernetes.io/projected/982448b9-890e-4481-9cd0-e9aa6f5df646-kube-api-access-5kv87\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940426 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8zn\" (UniqueName: 
\"kubernetes.io/projected/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-kube-api-access-mx8zn\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940469 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-ovs-rundir\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940505 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-combined-ca-bundle\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.940580 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-config\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 14:13:19.961440 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-66tc6"] Feb 14 14:13:19 crc kubenswrapper[4750]: E0214 14:13:19.962301 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-5kv87 ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" podUID="982448b9-890e-4481-9cd0-e9aa6f5df646" Feb 14 14:13:19 crc kubenswrapper[4750]: I0214 
14:13:19.998363 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn8kf"] Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.000737 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.003084 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.006592 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn8kf"] Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042053 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042105 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-config\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042138 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042157 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-ovn-rundir\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042179 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kv87\" (UniqueName: \"kubernetes.io/projected/982448b9-890e-4481-9cd0-e9aa6f5df646-kube-api-access-5kv87\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042196 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8zn\" (UniqueName: \"kubernetes.io/projected/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-kube-api-access-mx8zn\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042227 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-ovs-rundir\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042255 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-combined-ca-bundle\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042310 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-config\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.042395 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.043030 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-ovs-rundir\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.043209 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-ovn-rundir\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.043505 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-config\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.043589 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: 
\"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.043586 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-config\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.043795 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.047690 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.048250 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-combined-ca-bundle\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.061141 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kv87\" (UniqueName: \"kubernetes.io/projected/982448b9-890e-4481-9cd0-e9aa6f5df646-kube-api-access-5kv87\") pod \"dnsmasq-dns-7f896c8c65-66tc6\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" 
Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.065709 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8zn\" (UniqueName: \"kubernetes.io/projected/a7797e0a-0f7c-42a0-bbe1-1c9f525eea52-kube-api-access-mx8zn\") pod \"ovn-controller-metrics-6h75r\" (UID: \"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52\") " pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.143856 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6h75r" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.144517 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.144572 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-config\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.144602 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.144856 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8xh\" (UniqueName: 
\"kubernetes.io/projected/807d1caf-f2d5-4ea0-b345-d56408b986d6-kube-api-access-fr8xh\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.144958 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.199349 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.247539 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/807d1caf-f2d5-4ea0-b345-d56408b986d6-kube-api-access-fr8xh\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.247659 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.247705 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: 
I0214 14:13:20.247732 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-config\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.247768 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.248770 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.248836 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.248926 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.248965 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-config\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.284178 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/807d1caf-f2d5-4ea0-b345-d56408b986d6-kube-api-access-fr8xh\") pod \"dnsmasq-dns-86db49b7ff-jn8kf\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.315730 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.330235 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.389085 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.439596 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.479553 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.507281 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.552383 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kv87\" (UniqueName: \"kubernetes.io/projected/982448b9-890e-4481-9cd0-e9aa6f5df646-kube-api-access-5kv87\") pod \"982448b9-890e-4481-9cd0-e9aa6f5df646\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.552618 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-dns-svc\") pod \"982448b9-890e-4481-9cd0-e9aa6f5df646\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.552765 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-ovsdbserver-sb\") pod \"982448b9-890e-4481-9cd0-e9aa6f5df646\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.552821 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-config\") pod \"982448b9-890e-4481-9cd0-e9aa6f5df646\" (UID: \"982448b9-890e-4481-9cd0-e9aa6f5df646\") " Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.555154 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "982448b9-890e-4481-9cd0-e9aa6f5df646" (UID: "982448b9-890e-4481-9cd0-e9aa6f5df646"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.555472 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "982448b9-890e-4481-9cd0-e9aa6f5df646" (UID: "982448b9-890e-4481-9cd0-e9aa6f5df646"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.555702 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-config" (OuterVolumeSpecName: "config") pod "982448b9-890e-4481-9cd0-e9aa6f5df646" (UID: "982448b9-890e-4481-9cd0-e9aa6f5df646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:20 crc kubenswrapper[4750]: I0214 14:13:20.561426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982448b9-890e-4481-9cd0-e9aa6f5df646-kube-api-access-5kv87" (OuterVolumeSpecName: "kube-api-access-5kv87") pod "982448b9-890e-4481-9cd0-e9aa6f5df646" (UID: "982448b9-890e-4481-9cd0-e9aa6f5df646"). InnerVolumeSpecName "kube-api-access-5kv87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.652092 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6h75r"] Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.654724 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.654742 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.654752 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/982448b9-890e-4481-9cd0-e9aa6f5df646-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.654761 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kv87\" (UniqueName: \"kubernetes.io/projected/982448b9-890e-4481-9cd0-e9aa6f5df646-kube-api-access-5kv87\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.664670 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.666243 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.681773 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.682077 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.682400 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.682509 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9s4xg" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.759564 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.770883 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2hrz\" (UniqueName: \"kubernetes.io/projected/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-kube-api-access-n2hrz\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.770975 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-config\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.771032 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " 
pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.771397 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.771442 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-scripts\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.771464 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.771481 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.869005 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn8kf"] Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883786 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883826 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-scripts\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883845 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883863 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883919 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2hrz\" (UniqueName: \"kubernetes.io/projected/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-kube-api-access-n2hrz\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883948 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-config\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.883973 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.885443 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.885989 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-config\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.886602 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-scripts\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.900007 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.906703 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: 
I0214 14:13:20.907892 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:20.934594 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hrz\" (UniqueName: \"kubernetes.io/projected/12a3b14e-20a2-4845-af37-ae9e7a6ebbc7-kube-api-access-n2hrz\") pod \"ovn-northd-0\" (UID: \"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7\") " pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:21.115977 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:21.444565 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" event={"ID":"807d1caf-f2d5-4ea0-b345-d56408b986d6","Type":"ContainerStarted","Data":"8a44ba3bdf9b04e9e5eccb629b75cdf3d308c346b727a29544c9b65a2619aa11"} Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:21.445560 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6h75r" event={"ID":"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52","Type":"ContainerStarted","Data":"17901263677e8a2abf4129641ae7d8c23dee30a1ef174bb80e94939302b6758a"} Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:21.445776 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-66tc6" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:21.513815 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-66tc6"] Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:21.522824 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-66tc6"] Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.493745 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e9776fb-2263-407b-93a2-3f27f9e0635f","Type":"ContainerStarted","Data":"f3bb2274503e521f8b3f34e7f2c5d927f3fff657b1853fa0ee2f4161a003c056"} Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.497187 4750 generic.go:334] "Generic (PLEG): container finished" podID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerID="8b34863cd61bc4fcce09abffb22cb82f6d883f9352c7c43d3d04edbff8a50221" exitCode=0 Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.497313 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" event={"ID":"807d1caf-f2d5-4ea0-b345-d56408b986d6","Type":"ContainerDied","Data":"8b34863cd61bc4fcce09abffb22cb82f6d883f9352c7c43d3d04edbff8a50221"} Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.502463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"42db0a00-1aa6-4754-840c-f93a2b927858","Type":"ContainerStarted","Data":"23ed9886873cb7eff6c4a8e322d88944aa720810f9b3ae6e8595d4e0c0fad40f"} Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.504759 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6h75r" event={"ID":"a7797e0a-0f7c-42a0-bbe1-1c9f525eea52","Type":"ContainerStarted","Data":"ba06919ced4c6a650d902c1048cbb28e0a3f36bcf489466ce934d97f2b9e15f3"} Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.537817 4750 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=31.054575344 podStartE2EDuration="38.537796803s" podCreationTimestamp="2026-02-14 14:12:44 +0000 UTC" firstStartedPulling="2026-02-14 14:13:06.183242867 +0000 UTC m=+1258.209232348" lastFinishedPulling="2026-02-14 14:13:13.666464326 +0000 UTC m=+1265.692453807" observedRunningTime="2026-02-14 14:13:22.52747147 +0000 UTC m=+1274.553460961" watchObservedRunningTime="2026-02-14 14:13:22.537796803 +0000 UTC m=+1274.563786294" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.562054 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6h75r" podStartSLOduration=3.5620358530000003 podStartE2EDuration="3.562035853s" podCreationTimestamp="2026-02-14 14:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:13:22.547082988 +0000 UTC m=+1274.573072479" watchObservedRunningTime="2026-02-14 14:13:22.562035853 +0000 UTC m=+1274.588025334" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.626529 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.253999459 podStartE2EDuration="36.626510918s" podCreationTimestamp="2026-02-14 14:12:46 +0000 UTC" firstStartedPulling="2026-02-14 14:13:06.181364543 +0000 UTC m=+1258.207354024" lastFinishedPulling="2026-02-14 14:13:13.553876002 +0000 UTC m=+1265.579865483" observedRunningTime="2026-02-14 14:13:22.621621659 +0000 UTC m=+1274.647611140" watchObservedRunningTime="2026-02-14 14:13:22.626510918 +0000 UTC m=+1274.652500399" Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.756953 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982448b9-890e-4481-9cd0-e9aa6f5df646" path="/var/lib/kubelet/pods/982448b9-890e-4481-9cd0-e9aa6f5df646/volumes" 
Feb 14 14:13:22 crc kubenswrapper[4750]: I0214 14:13:22.790199 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 14 14:13:23 crc kubenswrapper[4750]: I0214 14:13:23.340407 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 14:13:23 crc kubenswrapper[4750]: W0214 14:13:23.341740 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12a3b14e_20a2_4845_af37_ae9e7a6ebbc7.slice/crio-ad041a559594f2069ae715648560244eebab365fee48d7797f5acdd75d398538 WatchSource:0}: Error finding container ad041a559594f2069ae715648560244eebab365fee48d7797f5acdd75d398538: Status 404 returned error can't find the container with id ad041a559594f2069ae715648560244eebab365fee48d7797f5acdd75d398538 Feb 14 14:13:23 crc kubenswrapper[4750]: I0214 14:13:23.514097 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" event={"ID":"807d1caf-f2d5-4ea0-b345-d56408b986d6","Type":"ContainerStarted","Data":"1c7f3f28eb622effc1329bb6464211b0e2243527f545fd0b7c786ed21343df35"} Feb 14 14:13:23 crc kubenswrapper[4750]: I0214 14:13:23.514241 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:23 crc kubenswrapper[4750]: I0214 14:13:23.515137 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7","Type":"ContainerStarted","Data":"ad041a559594f2069ae715648560244eebab365fee48d7797f5acdd75d398538"} Feb 14 14:13:23 crc kubenswrapper[4750]: I0214 14:13:23.536693 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" podStartSLOduration=4.536675647 podStartE2EDuration="4.536675647s" podCreationTimestamp="2026-02-14 14:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:13:23.528307299 +0000 UTC m=+1275.554296810" watchObservedRunningTime="2026-02-14 14:13:23.536675647 +0000 UTC m=+1275.562665128" Feb 14 14:13:24 crc kubenswrapper[4750]: I0214 14:13:24.525746 4750 generic.go:334] "Generic (PLEG): container finished" podID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerID="856c7e3b80fdff9660116d588a1dfeda0e7deb6bcbc8d3a246a83853fb939ae9" exitCode=0 Feb 14 14:13:24 crc kubenswrapper[4750]: I0214 14:13:24.525902 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerDied","Data":"856c7e3b80fdff9660116d588a1dfeda0e7deb6bcbc8d3a246a83853fb939ae9"} Feb 14 14:13:25 crc kubenswrapper[4750]: I0214 14:13:25.541050 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7","Type":"ContainerStarted","Data":"2026c5a5c24823b97d5045fa4c4dd7635f6dc3b6599ddb8335e2004888355540"} Feb 14 14:13:25 crc kubenswrapper[4750]: I0214 14:13:25.541607 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"12a3b14e-20a2-4845-af37-ae9e7a6ebbc7","Type":"ContainerStarted","Data":"ea8a9c157d34a92289d19a1ed2f6cee28ab488faaa159c71003f3ef7ffad357d"} Feb 14 14:13:25 crc kubenswrapper[4750]: I0214 14:13:25.541655 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 14 14:13:25 crc kubenswrapper[4750]: I0214 14:13:25.561442 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.198165351 podStartE2EDuration="5.561421173s" podCreationTimestamp="2026-02-14 14:13:20 +0000 UTC" firstStartedPulling="2026-02-14 14:13:23.343942253 +0000 UTC m=+1275.369931734" lastFinishedPulling="2026-02-14 14:13:24.707198075 +0000 UTC m=+1276.733187556" 
observedRunningTime="2026-02-14 14:13:25.560186138 +0000 UTC m=+1277.586175619" watchObservedRunningTime="2026-02-14 14:13:25.561421173 +0000 UTC m=+1277.587410664" Feb 14 14:13:26 crc kubenswrapper[4750]: I0214 14:13:26.260747 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 14:13:26 crc kubenswrapper[4750]: I0214 14:13:26.260796 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 14:13:27 crc kubenswrapper[4750]: I0214 14:13:27.094865 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 14 14:13:27 crc kubenswrapper[4750]: I0214 14:13:27.176472 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 14 14:13:27 crc kubenswrapper[4750]: I0214 14:13:27.419195 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 14:13:27 crc kubenswrapper[4750]: I0214 14:13:27.419322 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.817394 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3801-account-create-update-vcr52"] Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.820280 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.824170 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.845096 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8mdvr"] Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.846427 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.854682 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8mdvr"] Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.876579 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3801-account-create-update-vcr52"] Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.905616 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-operator-scripts\") pod \"keystone-3801-account-create-update-vcr52\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.905859 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mtc\" (UniqueName: \"kubernetes.io/projected/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-kube-api-access-w4mtc\") pod \"keystone-3801-account-create-update-vcr52\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.971173 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sttwb"] Feb 14 14:13:28 crc 
kubenswrapper[4750]: I0214 14:13:28.972460 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sttwb" Feb 14 14:13:28 crc kubenswrapper[4750]: I0214 14:13:28.992424 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sttwb"] Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.007553 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnhm6\" (UniqueName: \"kubernetes.io/projected/640eb31b-ab86-4684-9b86-9fab7eb5c26a-kube-api-access-tnhm6\") pod \"keystone-db-create-8mdvr\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.007650 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mtc\" (UniqueName: \"kubernetes.io/projected/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-kube-api-access-w4mtc\") pod \"keystone-3801-account-create-update-vcr52\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.007716 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-operator-scripts\") pod \"keystone-3801-account-create-update-vcr52\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.007755 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/640eb31b-ab86-4684-9b86-9fab7eb5c26a-operator-scripts\") pod \"keystone-db-create-8mdvr\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " pod="openstack/keystone-db-create-8mdvr" Feb 14 
14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.008751 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-operator-scripts\") pod \"keystone-3801-account-create-update-vcr52\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.068918 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-24be-account-create-update-skxg8"] Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.070394 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.078443 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.078752 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mtc\" (UniqueName: \"kubernetes.io/projected/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-kube-api-access-w4mtc\") pod \"keystone-3801-account-create-update-vcr52\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.080063 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-24be-account-create-update-skxg8"] Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.109710 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnhm6\" (UniqueName: \"kubernetes.io/projected/640eb31b-ab86-4684-9b86-9fab7eb5c26a-kube-api-access-tnhm6\") pod \"keystone-db-create-8mdvr\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.109774 
4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwx5m\" (UniqueName: \"kubernetes.io/projected/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-kube-api-access-bwx5m\") pod \"placement-db-create-sttwb\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.109852 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-operator-scripts\") pod \"placement-db-create-sttwb\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.109889 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/640eb31b-ab86-4684-9b86-9fab7eb5c26a-operator-scripts\") pod \"keystone-db-create-8mdvr\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.111059 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/640eb31b-ab86-4684-9b86-9fab7eb5c26a-operator-scripts\") pod \"keystone-db-create-8mdvr\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.128601 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnhm6\" (UniqueName: \"kubernetes.io/projected/640eb31b-ab86-4684-9b86-9fab7eb5c26a-kube-api-access-tnhm6\") pod \"keystone-db-create-8mdvr\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.144501 4750 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.202697 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.211357 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7qtf\" (UniqueName: \"kubernetes.io/projected/7a074c53-8712-4bea-a4b9-5af6f3e8daab-kube-api-access-v7qtf\") pod \"placement-24be-account-create-update-skxg8\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.211424 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-operator-scripts\") pod \"placement-db-create-sttwb\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.211584 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a074c53-8712-4bea-a4b9-5af6f3e8daab-operator-scripts\") pod \"placement-24be-account-create-update-skxg8\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.211618 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwx5m\" (UniqueName: \"kubernetes.io/projected/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-kube-api-access-bwx5m\") pod \"placement-db-create-sttwb\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: 
I0214 14:13:29.212579 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-operator-scripts\") pod \"placement-db-create-sttwb\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.228353 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwx5m\" (UniqueName: \"kubernetes.io/projected/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-kube-api-access-bwx5m\") pod \"placement-db-create-sttwb\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.307016 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sttwb" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.313156 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a074c53-8712-4bea-a4b9-5af6f3e8daab-operator-scripts\") pod \"placement-24be-account-create-update-skxg8\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.313246 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7qtf\" (UniqueName: \"kubernetes.io/projected/7a074c53-8712-4bea-a4b9-5af6f3e8daab-kube-api-access-v7qtf\") pod \"placement-24be-account-create-update-skxg8\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.314289 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7a074c53-8712-4bea-a4b9-5af6f3e8daab-operator-scripts\") pod \"placement-24be-account-create-update-skxg8\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.329292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7qtf\" (UniqueName: \"kubernetes.io/projected/7a074c53-8712-4bea-a4b9-5af6f3e8daab-kube-api-access-v7qtf\") pod \"placement-24be-account-create-update-skxg8\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.417769 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.747948 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.901153 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.969252 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn8kf"] Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.969520 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerName="dnsmasq-dns" containerID="cri-o://1c7f3f28eb622effc1329bb6464211b0e2243527f545fd0b7c786ed21343df35" gracePeriod=10 Feb 14 14:13:29 crc kubenswrapper[4750]: I0214 14:13:29.976918 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.007051 4750 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-4cmz6"] Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.012066 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.028949 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4cmz6"] Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.129835 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-config\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.129875 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.129895 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.129974 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpm7h\" (UniqueName: \"kubernetes.io/projected/a357c661-43db-45de-af4d-930aa51c9743-kube-api-access-cpm7h\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: 
\"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.130042 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-dns-svc\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.146042 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bjdpv"] Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.147306 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.170161 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bjdpv"] Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231517 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-config\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231570 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231595 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231677 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpm7h\" (UniqueName: \"kubernetes.io/projected/a357c661-43db-45de-af4d-930aa51c9743-kube-api-access-cpm7h\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231735 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwvk\" (UniqueName: \"kubernetes.io/projected/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-kube-api-access-6jwvk\") pod \"mysqld-exporter-openstack-db-create-bjdpv\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231760 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bjdpv\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.231791 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-dns-svc\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.232354 4750 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-config\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.232661 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-dns-svc\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.233173 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.233465 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.269577 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpm7h\" (UniqueName: \"kubernetes.io/projected/a357c661-43db-45de-af4d-930aa51c9743-kube-api-access-cpm7h\") pod \"dnsmasq-dns-698758b865-4cmz6\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.316594 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.333527 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwvk\" (UniqueName: \"kubernetes.io/projected/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-kube-api-access-6jwvk\") pod \"mysqld-exporter-openstack-db-create-bjdpv\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.333569 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bjdpv\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.334639 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bjdpv\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.336382 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d1e0-account-create-update-mrqds"] Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.342828 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.346902 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.365681 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwvk\" (UniqueName: \"kubernetes.io/projected/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-kube-api-access-6jwvk\") pod \"mysqld-exporter-openstack-db-create-bjdpv\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.371053 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d1e0-account-create-update-mrqds"] Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.394764 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.437277 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46r2\" (UniqueName: \"kubernetes.io/projected/bb695f49-57cd-490d-a9ab-208a33015140-kube-api-access-j46r2\") pod \"mysqld-exporter-d1e0-account-create-update-mrqds\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.437457 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb695f49-57cd-490d-a9ab-208a33015140-operator-scripts\") pod \"mysqld-exporter-d1e0-account-create-update-mrqds\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc 
kubenswrapper[4750]: I0214 14:13:30.469107 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.540093 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46r2\" (UniqueName: \"kubernetes.io/projected/bb695f49-57cd-490d-a9ab-208a33015140-kube-api-access-j46r2\") pod \"mysqld-exporter-d1e0-account-create-update-mrqds\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.540503 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb695f49-57cd-490d-a9ab-208a33015140-operator-scripts\") pod \"mysqld-exporter-d1e0-account-create-update-mrqds\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.541437 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb695f49-57cd-490d-a9ab-208a33015140-operator-scripts\") pod \"mysqld-exporter-d1e0-account-create-update-mrqds\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.558507 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46r2\" (UniqueName: \"kubernetes.io/projected/bb695f49-57cd-490d-a9ab-208a33015140-kube-api-access-j46r2\") pod \"mysqld-exporter-d1e0-account-create-update-mrqds\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.595536 4750 generic.go:334] 
"Generic (PLEG): container finished" podID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerID="1c7f3f28eb622effc1329bb6464211b0e2243527f545fd0b7c786ed21343df35" exitCode=0 Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.595621 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" event={"ID":"807d1caf-f2d5-4ea0-b345-d56408b986d6","Type":"ContainerDied","Data":"1c7f3f28eb622effc1329bb6464211b0e2243527f545fd0b7c786ed21343df35"} Feb 14 14:13:30 crc kubenswrapper[4750]: I0214 14:13:30.726064 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.099681 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.109665 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.114731 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.114753 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-54gqp" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.115239 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.115552 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.127034 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.151742 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" 
(UniqueName: \"kubernetes.io/empty-dir/e623022c-0cda-4463-b5e1-3157a1f8c1c1-cache\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.151788 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b98g6\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-kube-api-access-b98g6\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.151915 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.151946 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.151986 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e623022c-0cda-4463-b5e1-3157a1f8c1c1-lock\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.152009 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e623022c-0cda-4463-b5e1-3157a1f8c1c1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.257146 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.257203 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.257247 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e623022c-0cda-4463-b5e1-3157a1f8c1c1-lock\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.257273 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e623022c-0cda-4463-b5e1-3157a1f8c1c1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.257316 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e623022c-0cda-4463-b5e1-3157a1f8c1c1-cache\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " 
pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.257337 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b98g6\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-kube-api-access-b98g6\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: E0214 14:13:31.257416 4750 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 14:13:31 crc kubenswrapper[4750]: E0214 14:13:31.257438 4750 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 14:13:31 crc kubenswrapper[4750]: E0214 14:13:31.257497 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift podName:e623022c-0cda-4463-b5e1-3157a1f8c1c1 nodeName:}" failed. No retries permitted until 2026-02-14 14:13:31.757474296 +0000 UTC m=+1283.783463777 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift") pod "swift-storage-0" (UID: "e623022c-0cda-4463-b5e1-3157a1f8c1c1") : configmap "swift-ring-files" not found Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.258054 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e623022c-0cda-4463-b5e1-3157a1f8c1c1-cache\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.258341 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e623022c-0cda-4463-b5e1-3157a1f8c1c1-lock\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.263394 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.263439 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1df9b62e8cf4cb1ff26dec28c5cd381f9d5743bc049658c075753111b4a293b/globalmount\"" pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.266523 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e623022c-0cda-4463-b5e1-3157a1f8c1c1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.278961 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b98g6\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-kube-api-access-b98g6\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.325247 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e05f5022-f9f6-41f4-97e4-dbc59ffad410\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.610312 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" 
event={"ID":"807d1caf-f2d5-4ea0-b345-d56408b986d6","Type":"ContainerDied","Data":"8a44ba3bdf9b04e9e5eccb629b75cdf3d308c346b727a29544c9b65a2619aa11"} Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.610803 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a44ba3bdf9b04e9e5eccb629b75cdf3d308c346b727a29544c9b65a2619aa11" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.639096 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.666960 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-config\") pod \"807d1caf-f2d5-4ea0-b345-d56408b986d6\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.667030 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-sb\") pod \"807d1caf-f2d5-4ea0-b345-d56408b986d6\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.667104 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-nb\") pod \"807d1caf-f2d5-4ea0-b345-d56408b986d6\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.667162 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-dns-svc\") pod \"807d1caf-f2d5-4ea0-b345-d56408b986d6\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 
14:13:31.667451 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/807d1caf-f2d5-4ea0-b345-d56408b986d6-kube-api-access-fr8xh\") pod \"807d1caf-f2d5-4ea0-b345-d56408b986d6\" (UID: \"807d1caf-f2d5-4ea0-b345-d56408b986d6\") " Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.676802 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807d1caf-f2d5-4ea0-b345-d56408b986d6-kube-api-access-fr8xh" (OuterVolumeSpecName: "kube-api-access-fr8xh") pod "807d1caf-f2d5-4ea0-b345-d56408b986d6" (UID: "807d1caf-f2d5-4ea0-b345-d56408b986d6"). InnerVolumeSpecName "kube-api-access-fr8xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.723971 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-config" (OuterVolumeSpecName: "config") pod "807d1caf-f2d5-4ea0-b345-d56408b986d6" (UID: "807d1caf-f2d5-4ea0-b345-d56408b986d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.724700 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "807d1caf-f2d5-4ea0-b345-d56408b986d6" (UID: "807d1caf-f2d5-4ea0-b345-d56408b986d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.739577 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "807d1caf-f2d5-4ea0-b345-d56408b986d6" (UID: "807d1caf-f2d5-4ea0-b345-d56408b986d6"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.748825 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "807d1caf-f2d5-4ea0-b345-d56408b986d6" (UID: "807d1caf-f2d5-4ea0-b345-d56408b986d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.770215 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.770319 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/807d1caf-f2d5-4ea0-b345-d56408b986d6-kube-api-access-fr8xh\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.770332 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.770342 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.770350 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 
14:13:31.770357 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/807d1caf-f2d5-4ea0-b345-d56408b986d6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:31 crc kubenswrapper[4750]: E0214 14:13:31.770429 4750 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 14:13:31 crc kubenswrapper[4750]: E0214 14:13:31.770445 4750 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 14:13:31 crc kubenswrapper[4750]: E0214 14:13:31.770488 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift podName:e623022c-0cda-4463-b5e1-3157a1f8c1c1 nodeName:}" failed. No retries permitted until 2026-02-14 14:13:32.770473314 +0000 UTC m=+1284.796462795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift") pod "swift-storage-0" (UID: "e623022c-0cda-4463-b5e1-3157a1f8c1c1") : configmap "swift-ring-files" not found Feb 14 14:13:31 crc kubenswrapper[4750]: W0214 14:13:31.826277 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a074c53_8712_4bea_a4b9_5af6f3e8daab.slice/crio-f215bf1c0adf18f73a83df2fcbcccc30d2dc93a8272e5c3093c83ae5a970aec8 WatchSource:0}: Error finding container f215bf1c0adf18f73a83df2fcbcccc30d2dc93a8272e5c3093c83ae5a970aec8: Status 404 returned error can't find the container with id f215bf1c0adf18f73a83df2fcbcccc30d2dc93a8272e5c3093c83ae5a970aec8 Feb 14 14:13:31 crc kubenswrapper[4750]: I0214 14:13:31.827364 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-24be-account-create-update-skxg8"] Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 
14:13:32.243157 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8mdvr"] Feb 14 14:13:32 crc kubenswrapper[4750]: W0214 14:13:32.244037 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod640eb31b_ab86_4684_9b86_9fab7eb5c26a.slice/crio-f6ebae10c986e79e08269c7c6c842a8693b225d5b56768d4c50ee404925d40f7 WatchSource:0}: Error finding container f6ebae10c986e79e08269c7c6c842a8693b225d5b56768d4c50ee404925d40f7: Status 404 returned error can't find the container with id f6ebae10c986e79e08269c7c6c842a8693b225d5b56768d4c50ee404925d40f7 Feb 14 14:13:32 crc kubenswrapper[4750]: W0214 14:13:32.252051 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a5a81c3_f760_45a0_aaff_5a5252f1e4ac.slice/crio-c4148a61427aca5edb3f2dac205fb11ef110befe8290581df0e456427100c6e5 WatchSource:0}: Error finding container c4148a61427aca5edb3f2dac205fb11ef110befe8290581df0e456427100c6e5: Status 404 returned error can't find the container with id c4148a61427aca5edb3f2dac205fb11ef110befe8290581df0e456427100c6e5 Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.252887 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sttwb"] Feb 14 14:13:32 crc kubenswrapper[4750]: W0214 14:13:32.260560 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ffd32b9_4e66_43e5_b67c_d6b7a9cce266.slice/crio-b0f3333a8afadf656d08dabec4aeef494d1a3d49e7e840366315f317bb4e13fc WatchSource:0}: Error finding container b0f3333a8afadf656d08dabec4aeef494d1a3d49e7e840366315f317bb4e13fc: Status 404 returned error can't find the container with id b0f3333a8afadf656d08dabec4aeef494d1a3d49e7e840366315f317bb4e13fc Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.279234 4750 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/mysqld-exporter-d1e0-account-create-update-mrqds"] Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.289004 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3801-account-create-update-vcr52"] Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.302781 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4cmz6"] Feb 14 14:13:32 crc kubenswrapper[4750]: W0214 14:13:32.309752 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda357c661_43db_45de_af4d_930aa51c9743.slice/crio-2261f6ab5da71eb79a282d5f2e854d2dad2a0e4bfcc7364f2a9275da1038261f WatchSource:0}: Error finding container 2261f6ab5da71eb79a282d5f2e854d2dad2a0e4bfcc7364f2a9275da1038261f: Status 404 returned error can't find the container with id 2261f6ab5da71eb79a282d5f2e854d2dad2a0e4bfcc7364f2a9275da1038261f Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.321956 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bjdpv"] Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.628691 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerStarted","Data":"abde33cab497e7db6cf9f65c9368641100a293e5fd8c912a95a23097f661a56f"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.637999 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" event={"ID":"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3","Type":"ContainerStarted","Data":"93fcc651b2c9bfec0d19505e554b0e08616de2c8c7be24a2a41126e03d139594"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.641585 4750 generic.go:334] "Generic (PLEG): container finished" podID="7a074c53-8712-4bea-a4b9-5af6f3e8daab" 
containerID="5f26d4ca29e13fdadf1a72ef77a23842620ce12a318be0410e8ddbc7206d9d27" exitCode=0 Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.641651 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-24be-account-create-update-skxg8" event={"ID":"7a074c53-8712-4bea-a4b9-5af6f3e8daab","Type":"ContainerDied","Data":"5f26d4ca29e13fdadf1a72ef77a23842620ce12a318be0410e8ddbc7206d9d27"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.641672 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-24be-account-create-update-skxg8" event={"ID":"7a074c53-8712-4bea-a4b9-5af6f3e8daab","Type":"ContainerStarted","Data":"f215bf1c0adf18f73a83df2fcbcccc30d2dc93a8272e5c3093c83ae5a970aec8"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.643534 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" event={"ID":"bb695f49-57cd-490d-a9ab-208a33015140","Type":"ContainerStarted","Data":"3440a1c74ae7e115876e7e1f6e9bdf6395d065b3e6f4685db83a1a986a61584b"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.645278 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sttwb" event={"ID":"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac","Type":"ContainerStarted","Data":"c4148a61427aca5edb3f2dac205fb11ef110befe8290581df0e456427100c6e5"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.646591 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4cmz6" event={"ID":"a357c661-43db-45de-af4d-930aa51c9743","Type":"ContainerStarted","Data":"2261f6ab5da71eb79a282d5f2e854d2dad2a0e4bfcc7364f2a9275da1038261f"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.649469 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3801-account-create-update-vcr52" 
event={"ID":"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266","Type":"ContainerStarted","Data":"b0f3333a8afadf656d08dabec4aeef494d1a3d49e7e840366315f317bb4e13fc"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.651173 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8mdvr" event={"ID":"640eb31b-ab86-4684-9b86-9fab7eb5c26a","Type":"ContainerStarted","Data":"f6ebae10c986e79e08269c7c6c842a8693b225d5b56768d4c50ee404925d40f7"} Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.651220 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn8kf" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.692747 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn8kf"] Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.704751 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn8kf"] Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.760827 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" path="/var/lib/kubelet/pods/807d1caf-f2d5-4ea0-b345-d56408b986d6/volumes" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.810468 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:32 crc kubenswrapper[4750]: E0214 14:13:32.810753 4750 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 14:13:32 crc kubenswrapper[4750]: E0214 14:13:32.810780 4750 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 14:13:32 crc 
kubenswrapper[4750]: E0214 14:13:32.810832 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift podName:e623022c-0cda-4463-b5e1-3157a1f8c1c1 nodeName:}" failed. No retries permitted until 2026-02-14 14:13:34.810813006 +0000 UTC m=+1286.836802487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift") pod "swift-storage-0" (UID: "e623022c-0cda-4463-b5e1-3157a1f8c1c1") : configmap "swift-ring-files" not found Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.924547 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p7pt9"] Feb 14 14:13:32 crc kubenswrapper[4750]: E0214 14:13:32.924913 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerName="dnsmasq-dns" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.924929 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerName="dnsmasq-dns" Feb 14 14:13:32 crc kubenswrapper[4750]: E0214 14:13:32.924944 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerName="init" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.924950 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerName="init" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.925163 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="807d1caf-f2d5-4ea0-b345-d56408b986d6" containerName="dnsmasq-dns" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.925813 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:32 crc kubenswrapper[4750]: I0214 14:13:32.935496 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p7pt9"] Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.048323 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9e2a-account-create-update-hdhfj"] Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.050097 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.052738 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.059530 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9e2a-account-create-update-hdhfj"] Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.117560 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvbz\" (UniqueName: \"kubernetes.io/projected/e18bd93a-d349-4cc1-86ad-9bd86df1e566-kube-api-access-sfvbz\") pod \"glance-db-create-p7pt9\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.117926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18bd93a-d349-4cc1-86ad-9bd86df1e566-operator-scripts\") pod \"glance-db-create-p7pt9\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.219878 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18bd93a-d349-4cc1-86ad-9bd86df1e566-operator-scripts\") pod 
\"glance-db-create-p7pt9\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.220026 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvbz\" (UniqueName: \"kubernetes.io/projected/e18bd93a-d349-4cc1-86ad-9bd86df1e566-kube-api-access-sfvbz\") pod \"glance-db-create-p7pt9\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.220240 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae657046-3a10-4db8-936f-ff96a836579f-operator-scripts\") pod \"glance-9e2a-account-create-update-hdhfj\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.220306 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qz4\" (UniqueName: \"kubernetes.io/projected/ae657046-3a10-4db8-936f-ff96a836579f-kube-api-access-57qz4\") pod \"glance-9e2a-account-create-update-hdhfj\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.221295 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18bd93a-d349-4cc1-86ad-9bd86df1e566-operator-scripts\") pod \"glance-db-create-p7pt9\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.241921 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvbz\" (UniqueName: 
\"kubernetes.io/projected/e18bd93a-d349-4cc1-86ad-9bd86df1e566-kube-api-access-sfvbz\") pod \"glance-db-create-p7pt9\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.277420 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.322048 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae657046-3a10-4db8-936f-ff96a836579f-operator-scripts\") pod \"glance-9e2a-account-create-update-hdhfj\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.322132 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57qz4\" (UniqueName: \"kubernetes.io/projected/ae657046-3a10-4db8-936f-ff96a836579f-kube-api-access-57qz4\") pod \"glance-9e2a-account-create-update-hdhfj\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.322815 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae657046-3a10-4db8-936f-ff96a836579f-operator-scripts\") pod \"glance-9e2a-account-create-update-hdhfj\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.342959 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57qz4\" (UniqueName: \"kubernetes.io/projected/ae657046-3a10-4db8-936f-ff96a836579f-kube-api-access-57qz4\") pod \"glance-9e2a-account-create-update-hdhfj\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " 
pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.456440 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.660786 4750 generic.go:334] "Generic (PLEG): container finished" podID="5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" containerID="fba75ea03376eb388794d25cdf3194ae3965eaafb9cca925b748fdfc87942649" exitCode=0 Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.660858 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sttwb" event={"ID":"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac","Type":"ContainerDied","Data":"fba75ea03376eb388794d25cdf3194ae3965eaafb9cca925b748fdfc87942649"} Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.664543 4750 generic.go:334] "Generic (PLEG): container finished" podID="a357c661-43db-45de-af4d-930aa51c9743" containerID="b80c67ab0f3baa5c5b7b2899085c082ccb8b9b0e5cf8d1941b9936862d0ccccf" exitCode=0 Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.664580 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4cmz6" event={"ID":"a357c661-43db-45de-af4d-930aa51c9743","Type":"ContainerDied","Data":"b80c67ab0f3baa5c5b7b2899085c082ccb8b9b0e5cf8d1941b9936862d0ccccf"} Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.665878 4750 generic.go:334] "Generic (PLEG): container finished" podID="0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" containerID="7d84cb1a9eb74fa44e8910e260f94d4842f5b014fe79a398b4d93c875cbafe36" exitCode=0 Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.665933 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3801-account-create-update-vcr52" event={"ID":"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266","Type":"ContainerDied","Data":"7d84cb1a9eb74fa44e8910e260f94d4842f5b014fe79a398b4d93c875cbafe36"} Feb 14 14:13:33 crc 
kubenswrapper[4750]: I0214 14:13:33.668349 4750 generic.go:334] "Generic (PLEG): container finished" podID="640eb31b-ab86-4684-9b86-9fab7eb5c26a" containerID="63fb58cd16fa2d9fe9099b680abcd9fecaab1bae5536ea7589f5456df45d7604" exitCode=0 Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.668402 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8mdvr" event={"ID":"640eb31b-ab86-4684-9b86-9fab7eb5c26a","Type":"ContainerDied","Data":"63fb58cd16fa2d9fe9099b680abcd9fecaab1bae5536ea7589f5456df45d7604"} Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.669617 4750 generic.go:334] "Generic (PLEG): container finished" podID="ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" containerID="d8c6cbb45bd763e24889e8d28829421fa0053d9bd0e724cd0cb2a41d4371362d" exitCode=0 Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.669662 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" event={"ID":"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3","Type":"ContainerDied","Data":"d8c6cbb45bd763e24889e8d28829421fa0053d9bd0e724cd0cb2a41d4371362d"} Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.670874 4750 generic.go:334] "Generic (PLEG): container finished" podID="bb695f49-57cd-490d-a9ab-208a33015140" containerID="49614988fe67cac50432a075c1d79bef4a10f9f712a266398a8f1b9e2efb5b07" exitCode=0 Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.671059 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" event={"ID":"bb695f49-57cd-490d-a9ab-208a33015140","Type":"ContainerDied","Data":"49614988fe67cac50432a075c1d79bef4a10f9f712a266398a8f1b9e2efb5b07"} Feb 14 14:13:33 crc kubenswrapper[4750]: I0214 14:13:33.825284 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p7pt9"] Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.058126 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-9e2a-account-create-update-hdhfj"] Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.211284 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.263831 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a074c53-8712-4bea-a4b9-5af6f3e8daab-operator-scripts\") pod \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.264041 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7qtf\" (UniqueName: \"kubernetes.io/projected/7a074c53-8712-4bea-a4b9-5af6f3e8daab-kube-api-access-v7qtf\") pod \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\" (UID: \"7a074c53-8712-4bea-a4b9-5af6f3e8daab\") " Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.264479 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a074c53-8712-4bea-a4b9-5af6f3e8daab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a074c53-8712-4bea-a4b9-5af6f3e8daab" (UID: "7a074c53-8712-4bea-a4b9-5af6f3e8daab"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.265162 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a074c53-8712-4bea-a4b9-5af6f3e8daab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.269720 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a074c53-8712-4bea-a4b9-5af6f3e8daab-kube-api-access-v7qtf" (OuterVolumeSpecName: "kube-api-access-v7qtf") pod "7a074c53-8712-4bea-a4b9-5af6f3e8daab" (UID: "7a074c53-8712-4bea-a4b9-5af6f3e8daab"). InnerVolumeSpecName "kube-api-access-v7qtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.367082 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7qtf\" (UniqueName: \"kubernetes.io/projected/7a074c53-8712-4bea-a4b9-5af6f3e8daab-kube-api-access-v7qtf\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.393217 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.681801 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4cmz6" event={"ID":"a357c661-43db-45de-af4d-930aa51c9743","Type":"ContainerStarted","Data":"ecf02edf9e0a678876b704cf639f9fc4021f1a6022b8782719129cee8ebe5260"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.682157 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.686094 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerStarted","Data":"497eea6247b9916243db87178dca6c17e6136acdc43c2d05e1240c8250745d28"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.688184 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-24be-account-create-update-skxg8" event={"ID":"7a074c53-8712-4bea-a4b9-5af6f3e8daab","Type":"ContainerDied","Data":"f215bf1c0adf18f73a83df2fcbcccc30d2dc93a8272e5c3093c83ae5a970aec8"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.688217 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f215bf1c0adf18f73a83df2fcbcccc30d2dc93a8272e5c3093c83ae5a970aec8" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.688293 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-24be-account-create-update-skxg8" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.690887 4750 generic.go:334] "Generic (PLEG): container finished" podID="e18bd93a-d349-4cc1-86ad-9bd86df1e566" containerID="fd193fdad1a4ca2212289d54ede52b4f750c3d8a2cb95787f49ddfe4ab5eeab2" exitCode=0 Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.691029 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7pt9" event={"ID":"e18bd93a-d349-4cc1-86ad-9bd86df1e566","Type":"ContainerDied","Data":"fd193fdad1a4ca2212289d54ede52b4f750c3d8a2cb95787f49ddfe4ab5eeab2"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.691057 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7pt9" event={"ID":"e18bd93a-d349-4cc1-86ad-9bd86df1e566","Type":"ContainerStarted","Data":"487f1001ec0eb629ad2399fb20f4d22e7ba96173556641f47b92d7cc48b2bd03"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.692920 4750 generic.go:334] "Generic (PLEG): container finished" podID="ae657046-3a10-4db8-936f-ff96a836579f" 
containerID="816220762eaf1d0e0fd24f9eb332f773b8eb341ba01ae3368f7b97047fc602e6" exitCode=0 Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.693090 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e2a-account-create-update-hdhfj" event={"ID":"ae657046-3a10-4db8-936f-ff96a836579f","Type":"ContainerDied","Data":"816220762eaf1d0e0fd24f9eb332f773b8eb341ba01ae3368f7b97047fc602e6"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.693128 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e2a-account-create-update-hdhfj" event={"ID":"ae657046-3a10-4db8-936f-ff96a836579f","Type":"ContainerStarted","Data":"bb5a3ebb1902d7282234d846a4b6fc9af4f5dc06acb510fc8dfcfb27663089e7"} Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.709466 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-4cmz6" podStartSLOduration=5.7094513639999995 podStartE2EDuration="5.709451364s" podCreationTimestamp="2026-02-14 14:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:13:34.70860753 +0000 UTC m=+1286.734597031" watchObservedRunningTime="2026-02-14 14:13:34.709451364 +0000 UTC m=+1286.735440845" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.860961 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zfgpw"] Feb 14 14:13:34 crc kubenswrapper[4750]: E0214 14:13:34.865182 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a074c53-8712-4bea-a4b9-5af6f3e8daab" containerName="mariadb-account-create-update" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.865223 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a074c53-8712-4bea-a4b9-5af6f3e8daab" containerName="mariadb-account-create-update" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.865527 4750 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7a074c53-8712-4bea-a4b9-5af6f3e8daab" containerName="mariadb-account-create-update" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.866482 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.872124 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.876214 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a613a4-e584-4eb2-9e9c-f7e516fcade6-operator-scripts\") pod \"root-account-create-update-zfgpw\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.876301 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/82a613a4-e584-4eb2-9e9c-f7e516fcade6-kube-api-access-r5lfk\") pod \"root-account-create-update-zfgpw\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.876443 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:34 crc kubenswrapper[4750]: E0214 14:13:34.876746 4750 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 14:13:34 crc kubenswrapper[4750]: E0214 14:13:34.876769 4750 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 14:13:34 crc kubenswrapper[4750]: E0214 14:13:34.876820 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift podName:e623022c-0cda-4463-b5e1-3157a1f8c1c1 nodeName:}" failed. No retries permitted until 2026-02-14 14:13:38.876802266 +0000 UTC m=+1290.902791747 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift") pod "swift-storage-0" (UID: "e623022c-0cda-4463-b5e1-3157a1f8c1c1") : configmap "swift-ring-files" not found Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.892994 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zfgpw"] Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.978423 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/82a613a4-e584-4eb2-9e9c-f7e516fcade6-kube-api-access-r5lfk\") pod \"root-account-create-update-zfgpw\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.978895 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a613a4-e584-4eb2-9e9c-f7e516fcade6-operator-scripts\") pod \"root-account-create-update-zfgpw\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:34 crc kubenswrapper[4750]: I0214 14:13:34.979675 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a613a4-e584-4eb2-9e9c-f7e516fcade6-operator-scripts\") pod \"root-account-create-update-zfgpw\" (UID: 
\"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.017101 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/82a613a4-e584-4eb2-9e9c-f7e516fcade6-kube-api-access-r5lfk\") pod \"root-account-create-update-zfgpw\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.063364 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-97hvb"] Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.065390 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.069640 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.069883 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.069995 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.074536 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-97hvb"] Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082407 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-swiftconf\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082543 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-dispersionconf\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082572 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lm7\" (UniqueName: \"kubernetes.io/projected/52f38032-f0a9-42d6-8111-e57596583323-kube-api-access-v4lm7\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082618 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-scripts\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082708 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-combined-ca-bundle\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082769 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f38032-f0a9-42d6-8111-e57596583323-etc-swift\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.082792 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-ring-data-devices\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.120413 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-p2fnm"] Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.121859 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.141497 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-97hvb"] Feb 14 14:13:35 crc kubenswrapper[4750]: E0214 14:13:35.142496 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-v4lm7 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-97hvb" podUID="52f38032-f0a9-42d6-8111-e57596583323" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.150883 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-p2fnm"] Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.185719 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-combined-ca-bundle\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.185807 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscgd\" 
(UniqueName: \"kubernetes.io/projected/e6aba344-d824-42da-996e-733b7480a2eb-kube-api-access-cscgd\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.185959 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f38032-f0a9-42d6-8111-e57596583323-etc-swift\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186204 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-dispersionconf\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186262 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-ring-data-devices\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186307 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-scripts\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186357 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-swiftconf\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186385 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-swiftconf\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186420 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-ring-data-devices\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186451 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6aba344-d824-42da-996e-733b7480a2eb-etc-swift\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186482 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-combined-ca-bundle\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186518 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-dispersionconf\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186544 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lm7\" (UniqueName: \"kubernetes.io/projected/52f38032-f0a9-42d6-8111-e57596583323-kube-api-access-v4lm7\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186915 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-scripts\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.186932 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f38032-f0a9-42d6-8111-e57596583323-etc-swift\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.187401 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-ring-data-devices\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.187490 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-scripts\") pod \"swift-ring-rebalance-97hvb\" 
(UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.190989 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-dispersionconf\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.191905 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.192830 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-combined-ca-bundle\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.204979 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lm7\" (UniqueName: \"kubernetes.io/projected/52f38032-f0a9-42d6-8111-e57596583323-kube-api-access-v4lm7\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.215933 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-swiftconf\") pod \"swift-ring-rebalance-97hvb\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291176 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-ring-data-devices\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291225 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6aba344-d824-42da-996e-733b7480a2eb-etc-swift\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291259 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-combined-ca-bundle\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291367 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscgd\" (UniqueName: \"kubernetes.io/projected/e6aba344-d824-42da-996e-733b7480a2eb-kube-api-access-cscgd\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291398 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-dispersionconf\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291427 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-scripts\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.291470 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-swiftconf\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.292438 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-ring-data-devices\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.292549 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6aba344-d824-42da-996e-733b7480a2eb-etc-swift\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.293478 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-scripts\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.295515 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-swiftconf\") pod \"swift-ring-rebalance-p2fnm\" (UID: 
\"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.297431 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-dispersionconf\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.297892 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-combined-ca-bundle\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.317694 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscgd\" (UniqueName: \"kubernetes.io/projected/e6aba344-d824-42da-996e-733b7480a2eb-kube-api-access-cscgd\") pod \"swift-ring-rebalance-p2fnm\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.366411 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.392682 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jwvk\" (UniqueName: \"kubernetes.io/projected/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-kube-api-access-6jwvk\") pod \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.393052 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-operator-scripts\") pod \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\" (UID: \"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.397500 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-kube-api-access-6jwvk" (OuterVolumeSpecName: "kube-api-access-6jwvk") pod "ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" (UID: "ce8ba1fe-164b-4f82-a893-aabf47cbb6f3"). InnerVolumeSpecName "kube-api-access-6jwvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.405141 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" (UID: "ce8ba1fe-164b-4f82-a893-aabf47cbb6f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.445232 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.496016 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jwvk\" (UniqueName: \"kubernetes.io/projected/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-kube-api-access-6jwvk\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.496045 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.546594 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.554883 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sttwb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.586790 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.598211 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46r2\" (UniqueName: \"kubernetes.io/projected/bb695f49-57cd-490d-a9ab-208a33015140-kube-api-access-j46r2\") pod \"bb695f49-57cd-490d-a9ab-208a33015140\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.598498 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4mtc\" (UniqueName: \"kubernetes.io/projected/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-kube-api-access-w4mtc\") pod \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.598576 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwx5m\" (UniqueName: \"kubernetes.io/projected/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-kube-api-access-bwx5m\") pod \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.598686 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-operator-scripts\") pod \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\" (UID: \"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.598909 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-operator-scripts\") pod \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\" (UID: \"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.599000 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb695f49-57cd-490d-a9ab-208a33015140-operator-scripts\") pod \"bb695f49-57cd-490d-a9ab-208a33015140\" (UID: \"bb695f49-57cd-490d-a9ab-208a33015140\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.599188 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.599652 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" (UID: "0ffd32b9-4e66-43e5-b67c-d6b7a9cce266"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.600049 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb695f49-57cd-490d-a9ab-208a33015140-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb695f49-57cd-490d-a9ab-208a33015140" (UID: "bb695f49-57cd-490d-a9ab-208a33015140"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.600531 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" (UID: "5a5a81c3-f760-45a0-aaff-5a5252f1e4ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.601808 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.601830 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.601839 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb695f49-57cd-490d-a9ab-208a33015140-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.608377 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-kube-api-access-bwx5m" (OuterVolumeSpecName: "kube-api-access-bwx5m") pod "5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" (UID: "5a5a81c3-f760-45a0-aaff-5a5252f1e4ac"). InnerVolumeSpecName "kube-api-access-bwx5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.608432 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-kube-api-access-w4mtc" (OuterVolumeSpecName: "kube-api-access-w4mtc") pod "0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" (UID: "0ffd32b9-4e66-43e5-b67c-d6b7a9cce266"). InnerVolumeSpecName "kube-api-access-w4mtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.609786 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb695f49-57cd-490d-a9ab-208a33015140-kube-api-access-j46r2" (OuterVolumeSpecName: "kube-api-access-j46r2") pod "bb695f49-57cd-490d-a9ab-208a33015140" (UID: "bb695f49-57cd-490d-a9ab-208a33015140"). InnerVolumeSpecName "kube-api-access-j46r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.705451 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnhm6\" (UniqueName: \"kubernetes.io/projected/640eb31b-ab86-4684-9b86-9fab7eb5c26a-kube-api-access-tnhm6\") pod \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.705564 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/640eb31b-ab86-4684-9b86-9fab7eb5c26a-operator-scripts\") pod \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\" (UID: \"640eb31b-ab86-4684-9b86-9fab7eb5c26a\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.706770 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/640eb31b-ab86-4684-9b86-9fab7eb5c26a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "640eb31b-ab86-4684-9b86-9fab7eb5c26a" (UID: "640eb31b-ab86-4684-9b86-9fab7eb5c26a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.706908 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46r2\" (UniqueName: \"kubernetes.io/projected/bb695f49-57cd-490d-a9ab-208a33015140-kube-api-access-j46r2\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.706952 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4mtc\" (UniqueName: \"kubernetes.io/projected/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266-kube-api-access-w4mtc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.706972 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwx5m\" (UniqueName: \"kubernetes.io/projected/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac-kube-api-access-bwx5m\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.715381 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sttwb" event={"ID":"5a5a81c3-f760-45a0-aaff-5a5252f1e4ac","Type":"ContainerDied","Data":"c4148a61427aca5edb3f2dac205fb11ef110befe8290581df0e456427100c6e5"} Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.715416 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sttwb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.715425 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4148a61427aca5edb3f2dac205fb11ef110befe8290581df0e456427100c6e5" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.716890 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/640eb31b-ab86-4684-9b86-9fab7eb5c26a-kube-api-access-tnhm6" (OuterVolumeSpecName: "kube-api-access-tnhm6") pod "640eb31b-ab86-4684-9b86-9fab7eb5c26a" (UID: "640eb31b-ab86-4684-9b86-9fab7eb5c26a"). 
InnerVolumeSpecName "kube-api-access-tnhm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.717898 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3801-account-create-update-vcr52" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.717892 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3801-account-create-update-vcr52" event={"ID":"0ffd32b9-4e66-43e5-b67c-d6b7a9cce266","Type":"ContainerDied","Data":"b0f3333a8afadf656d08dabec4aeef494d1a3d49e7e840366315f317bb4e13fc"} Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.718187 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f3333a8afadf656d08dabec4aeef494d1a3d49e7e840366315f317bb4e13fc" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.722463 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8mdvr" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.723977 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8mdvr" event={"ID":"640eb31b-ab86-4684-9b86-9fab7eb5c26a","Type":"ContainerDied","Data":"f6ebae10c986e79e08269c7c6c842a8693b225d5b56768d4c50ee404925d40f7"} Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.724014 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ebae10c986e79e08269c7c6c842a8693b225d5b56768d4c50ee404925d40f7" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.726781 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" event={"ID":"ce8ba1fe-164b-4f82-a893-aabf47cbb6f3","Type":"ContainerDied","Data":"93fcc651b2c9bfec0d19505e554b0e08616de2c8c7be24a2a41126e03d139594"} Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.726801 4750 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="93fcc651b2c9bfec0d19505e554b0e08616de2c8c7be24a2a41126e03d139594" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.726886 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bjdpv" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.743341 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" event={"ID":"bb695f49-57cd-490d-a9ab-208a33015140","Type":"ContainerDied","Data":"3440a1c74ae7e115876e7e1f6e9bdf6395d065b3e6f4685db83a1a986a61584b"} Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.743376 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3440a1c74ae7e115876e7e1f6e9bdf6395d065b3e6f4685db83a1a986a61584b" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.743374 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.743376 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d1e0-account-create-update-mrqds" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.808845 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnhm6\" (UniqueName: \"kubernetes.io/projected/640eb31b-ab86-4684-9b86-9fab7eb5c26a-kube-api-access-tnhm6\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.809174 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/640eb31b-ab86-4684-9b86-9fab7eb5c26a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.856877 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.880919 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zfgpw"] Feb 14 14:13:35 crc kubenswrapper[4750]: E0214 14:13:35.882934 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce8ba1fe_164b_4f82_a893_aabf47cbb6f3.slice\": RecentStats: unable to find data in memory cache]" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910197 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f38032-f0a9-42d6-8111-e57596583323-etc-swift\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910258 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-combined-ca-bundle\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910383 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-ring-data-devices\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910449 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-swiftconf\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc 
kubenswrapper[4750]: I0214 14:13:35.910553 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-scripts\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910603 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4lm7\" (UniqueName: \"kubernetes.io/projected/52f38032-f0a9-42d6-8111-e57596583323-kube-api-access-v4lm7\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910691 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-dispersionconf\") pod \"52f38032-f0a9-42d6-8111-e57596583323\" (UID: \"52f38032-f0a9-42d6-8111-e57596583323\") " Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.910840 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.911171 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-scripts" (OuterVolumeSpecName: "scripts") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.911543 4750 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.911561 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f38032-f0a9-42d6-8111-e57596583323-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.911891 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f38032-f0a9-42d6-8111-e57596583323-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.914734 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.915264 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.924273 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:35 crc kubenswrapper[4750]: I0214 14:13:35.924286 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f38032-f0a9-42d6-8111-e57596583323-kube-api-access-v4lm7" (OuterVolumeSpecName: "kube-api-access-v4lm7") pod "52f38032-f0a9-42d6-8111-e57596583323" (UID: "52f38032-f0a9-42d6-8111-e57596583323"). InnerVolumeSpecName "kube-api-access-v4lm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.014712 4750 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f38032-f0a9-42d6-8111-e57596583323-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.014746 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.014756 4750 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.014765 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4lm7\" (UniqueName: \"kubernetes.io/projected/52f38032-f0a9-42d6-8111-e57596583323-kube-api-access-v4lm7\") on node 
\"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.014773 4750 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f38032-f0a9-42d6-8111-e57596583323-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.039806 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-p2fnm"] Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.210144 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.222573 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvbz\" (UniqueName: \"kubernetes.io/projected/e18bd93a-d349-4cc1-86ad-9bd86df1e566-kube-api-access-sfvbz\") pod \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.224746 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18bd93a-d349-4cc1-86ad-9bd86df1e566-operator-scripts\") pod \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\" (UID: \"e18bd93a-d349-4cc1-86ad-9bd86df1e566\") " Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.225754 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e18bd93a-d349-4cc1-86ad-9bd86df1e566-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e18bd93a-d349-4cc1-86ad-9bd86df1e566" (UID: "e18bd93a-d349-4cc1-86ad-9bd86df1e566"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.227045 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18bd93a-d349-4cc1-86ad-9bd86df1e566-kube-api-access-sfvbz" (OuterVolumeSpecName: "kube-api-access-sfvbz") pod "e18bd93a-d349-4cc1-86ad-9bd86df1e566" (UID: "e18bd93a-d349-4cc1-86ad-9bd86df1e566"). InnerVolumeSpecName "kube-api-access-sfvbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.231356 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.326750 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57qz4\" (UniqueName: \"kubernetes.io/projected/ae657046-3a10-4db8-936f-ff96a836579f-kube-api-access-57qz4\") pod \"ae657046-3a10-4db8-936f-ff96a836579f\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.327021 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae657046-3a10-4db8-936f-ff96a836579f-operator-scripts\") pod \"ae657046-3a10-4db8-936f-ff96a836579f\" (UID: \"ae657046-3a10-4db8-936f-ff96a836579f\") " Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.327425 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae657046-3a10-4db8-936f-ff96a836579f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae657046-3a10-4db8-936f-ff96a836579f" (UID: "ae657046-3a10-4db8-936f-ff96a836579f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.328009 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18bd93a-d349-4cc1-86ad-9bd86df1e566-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.328090 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae657046-3a10-4db8-936f-ff96a836579f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.328193 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvbz\" (UniqueName: \"kubernetes.io/projected/e18bd93a-d349-4cc1-86ad-9bd86df1e566-kube-api-access-sfvbz\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.333231 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae657046-3a10-4db8-936f-ff96a836579f-kube-api-access-57qz4" (OuterVolumeSpecName: "kube-api-access-57qz4") pod "ae657046-3a10-4db8-936f-ff96a836579f" (UID: "ae657046-3a10-4db8-936f-ff96a836579f"). InnerVolumeSpecName "kube-api-access-57qz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.430276 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57qz4\" (UniqueName: \"kubernetes.io/projected/ae657046-3a10-4db8-936f-ff96a836579f-kube-api-access-57qz4\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.755425 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p7pt9" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.756716 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9e2a-account-create-update-hdhfj" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.757802 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-97hvb" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.774512 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfgpw" event={"ID":"82a613a4-e584-4eb2-9e9c-f7e516fcade6","Type":"ContainerStarted","Data":"d05a287a847a88519b77183927024a040ba2287ae14883a9996d70aec14b6cd0"} Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.774562 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfgpw" event={"ID":"82a613a4-e584-4eb2-9e9c-f7e516fcade6","Type":"ContainerStarted","Data":"450244147b60b896e41a3f0a14ee4025e77619a5fec87cafb329d34978294aed"} Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.774582 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7pt9" event={"ID":"e18bd93a-d349-4cc1-86ad-9bd86df1e566","Type":"ContainerDied","Data":"487f1001ec0eb629ad2399fb20f4d22e7ba96173556641f47b92d7cc48b2bd03"} Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.774599 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="487f1001ec0eb629ad2399fb20f4d22e7ba96173556641f47b92d7cc48b2bd03" Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.774612 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e2a-account-create-update-hdhfj" event={"ID":"ae657046-3a10-4db8-936f-ff96a836579f","Type":"ContainerDied","Data":"bb5a3ebb1902d7282234d846a4b6fc9af4f5dc06acb510fc8dfcfb27663089e7"} Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.774623 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb5a3ebb1902d7282234d846a4b6fc9af4f5dc06acb510fc8dfcfb27663089e7" Feb 14 14:13:36 crc 
kubenswrapper[4750]: I0214 14:13:36.774633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2fnm" event={"ID":"e6aba344-d824-42da-996e-733b7480a2eb","Type":"ContainerStarted","Data":"4c23fb56850f47fbbdedaac77587f612e9f074f523132ee0643d1a566e55589d"} Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.848223 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-97hvb"] Feb 14 14:13:36 crc kubenswrapper[4750]: I0214 14:13:36.867407 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-97hvb"] Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.370056 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f459dfbdf-9lrz2" podUID="fae44155-c5f4-49f1-ab15-1365f1c77e0b" containerName="console" containerID="cri-o://c88c212a9c0bf838f69e85f975167886c321d0454cadc47dea2ff7434fa0b108" gracePeriod=15 Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.786911 4750 generic.go:334] "Generic (PLEG): container finished" podID="82a613a4-e584-4eb2-9e9c-f7e516fcade6" containerID="d05a287a847a88519b77183927024a040ba2287ae14883a9996d70aec14b6cd0" exitCode=0 Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.787020 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfgpw" event={"ID":"82a613a4-e584-4eb2-9e9c-f7e516fcade6","Type":"ContainerDied","Data":"d05a287a847a88519b77183927024a040ba2287ae14883a9996d70aec14b6cd0"} Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.793077 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f459dfbdf-9lrz2_fae44155-c5f4-49f1-ab15-1365f1c77e0b/console/0.log" Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.793141 4750 generic.go:334] "Generic (PLEG): container finished" podID="fae44155-c5f4-49f1-ab15-1365f1c77e0b" 
containerID="c88c212a9c0bf838f69e85f975167886c321d0454cadc47dea2ff7434fa0b108" exitCode=2 Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.793178 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f459dfbdf-9lrz2" event={"ID":"fae44155-c5f4-49f1-ab15-1365f1c77e0b","Type":"ContainerDied","Data":"c88c212a9c0bf838f69e85f975167886c321d0454cadc47dea2ff7434fa0b108"} Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.927570 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f459dfbdf-9lrz2_fae44155-c5f4-49f1-ab15-1365f1c77e0b/console/0.log" Feb 14 14:13:37 crc kubenswrapper[4750]: I0214 14:13:37.927910 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.067570 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-config\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.067628 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-trusted-ca-bundle\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.067844 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-oauth-serving-cert\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.067900 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-scxxv\" (UniqueName: \"kubernetes.io/projected/fae44155-c5f4-49f1-ab15-1365f1c77e0b-kube-api-access-scxxv\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.067990 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-service-ca\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.068053 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-oauth-config\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.068087 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-serving-cert\") pod \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\" (UID: \"fae44155-c5f4-49f1-ab15-1365f1c77e0b\") " Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.076009 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-service-ca" (OuterVolumeSpecName: "service-ca") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.076042 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-config" (OuterVolumeSpecName: "console-config") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.076136 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.076150 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.079765 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.084791 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.084864 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae44155-c5f4-49f1-ab15-1365f1c77e0b-kube-api-access-scxxv" (OuterVolumeSpecName: "kube-api-access-scxxv") pod "fae44155-c5f4-49f1-ab15-1365f1c77e0b" (UID: "fae44155-c5f4-49f1-ab15-1365f1c77e0b"). InnerVolumeSpecName "kube-api-access-scxxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170692 4750 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170725 4750 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170733 4750 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170742 4750 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170752 4750 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170759 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scxxv\" (UniqueName: \"kubernetes.io/projected/fae44155-c5f4-49f1-ab15-1365f1c77e0b-kube-api-access-scxxv\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.170768 4750 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fae44155-c5f4-49f1-ab15-1365f1c77e0b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.265598 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r6mf6"] Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.266207 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae657046-3a10-4db8-936f-ff96a836579f" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.266312 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae657046-3a10-4db8-936f-ff96a836579f" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.266375 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb695f49-57cd-490d-a9ab-208a33015140" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.266429 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb695f49-57cd-490d-a9ab-208a33015140" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 
14:13:38.266487 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.266536 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.266599 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.266647 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.266701 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae44155-c5f4-49f1-ab15-1365f1c77e0b" containerName="console" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.266753 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae44155-c5f4-49f1-ab15-1365f1c77e0b" containerName="console" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.266809 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267038 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.267099 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="640eb31b-ab86-4684-9b86-9fab7eb5c26a" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267163 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="640eb31b-ab86-4684-9b86-9fab7eb5c26a" containerName="mariadb-database-create" 
Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.267213 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18bd93a-d349-4cc1-86ad-9bd86df1e566" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267260 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18bd93a-d349-4cc1-86ad-9bd86df1e566" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267504 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267575 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb695f49-57cd-490d-a9ab-208a33015140" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267635 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae657046-3a10-4db8-936f-ff96a836579f" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267700 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" containerName="mariadb-account-create-update" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267761 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267831 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18bd93a-d349-4cc1-86ad-9bd86df1e566" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267888 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae44155-c5f4-49f1-ab15-1365f1c77e0b" containerName="console" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.267939 4750 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="640eb31b-ab86-4684-9b86-9fab7eb5c26a" containerName="mariadb-database-create" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.268708 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.271440 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-66gxs" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.272428 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.308280 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r6mf6"] Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.374449 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-config-data\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.374512 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-db-sync-config-data\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.374635 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-combined-ca-bundle\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc 
kubenswrapper[4750]: I0214 14:13:38.374692 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crv7d\" (UniqueName: \"kubernetes.io/projected/4c9f7486-cff2-4a61-8c5e-c71977aab921-kube-api-access-crv7d\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.478378 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-combined-ca-bundle\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.478550 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crv7d\" (UniqueName: \"kubernetes.io/projected/4c9f7486-cff2-4a61-8c5e-c71977aab921-kube-api-access-crv7d\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.479003 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-config-data\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.479140 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-db-sync-config-data\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.487844 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-db-sync-config-data\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.487958 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-combined-ca-bundle\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.488192 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-config-data\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.495434 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crv7d\" (UniqueName: \"kubernetes.io/projected/4c9f7486-cff2-4a61-8c5e-c71977aab921-kube-api-access-crv7d\") pod \"glance-db-sync-r6mf6\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.713960 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6mf6" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.754933 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f38032-f0a9-42d6-8111-e57596583323" path="/var/lib/kubelet/pods/52f38032-f0a9-42d6-8111-e57596583323/volumes" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.807980 4750 generic.go:334] "Generic (PLEG): container finished" podID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerID="0fccd90bee9500e05f117ba916dd6afb5f3593b6429c3514da328a678356bd23" exitCode=0 Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.808029 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9ccd0d36-6fed-4aeb-b811-28cf48001750","Type":"ContainerDied","Data":"0fccd90bee9500e05f117ba916dd6afb5f3593b6429c3514da328a678356bd23"} Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.809928 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f459dfbdf-9lrz2_fae44155-c5f4-49f1-ab15-1365f1c77e0b/console/0.log" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.809985 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f459dfbdf-9lrz2" event={"ID":"fae44155-c5f4-49f1-ab15-1365f1c77e0b","Type":"ContainerDied","Data":"2e42e50c09139523bfcbb6bccc8afa76df46ffcbec2c4e6c0d6e6fb247851a8f"} Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.810017 4750 scope.go:117] "RemoveContainer" containerID="c88c212a9c0bf838f69e85f975167886c321d0454cadc47dea2ff7434fa0b108" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.810142 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f459dfbdf-9lrz2" Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.811732 4750 generic.go:334] "Generic (PLEG): container finished" podID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerID="9199a836dc535c97ea00db338dd3c3eb706b1efaadba65e0b4e87bbe403356f1" exitCode=0 Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.811795 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8775619e-e8b9-4bee-987c-2cba92f6fcf3","Type":"ContainerDied","Data":"9199a836dc535c97ea00db338dd3c3eb706b1efaadba65e0b4e87bbe403356f1"} Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.815591 4750 generic.go:334] "Generic (PLEG): container finished" podID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerID="f621366506caeb02dee9c6bde64abfe76f9b07ca0844ab51f1565b84455dc23c" exitCode=0 Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.815697 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f513b3ee-aa21-48f3-b5fa-395f0557292a","Type":"ContainerDied","Data":"f621366506caeb02dee9c6bde64abfe76f9b07ca0844ab51f1565b84455dc23c"} Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.819515 4750 generic.go:334] "Generic (PLEG): container finished" podID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerID="9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb" exitCode=0 Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.819706 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cb0d3b08-53c1-4396-9413-bf581fad715a","Type":"ContainerDied","Data":"9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb"} Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.895804 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod 
\"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.895963 4750 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.895978 4750 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 14:13:38 crc kubenswrapper[4750]: E0214 14:13:38.896013 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift podName:e623022c-0cda-4463-b5e1-3157a1f8c1c1 nodeName:}" failed. No retries permitted until 2026-02-14 14:13:46.896001044 +0000 UTC m=+1298.921990525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift") pod "swift-storage-0" (UID: "e623022c-0cda-4463-b5e1-3157a1f8c1c1") : configmap "swift-ring-files" not found Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.904715 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f459dfbdf-9lrz2"] Feb 14 14:13:38 crc kubenswrapper[4750]: I0214 14:13:38.922552 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f459dfbdf-9lrz2"] Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.071397 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.221841 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a613a4-e584-4eb2-9e9c-f7e516fcade6-operator-scripts\") pod \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.222045 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/82a613a4-e584-4eb2-9e9c-f7e516fcade6-kube-api-access-r5lfk\") pod \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\" (UID: \"82a613a4-e584-4eb2-9e9c-f7e516fcade6\") " Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.223638 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a613a4-e584-4eb2-9e9c-f7e516fcade6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82a613a4-e584-4eb2-9e9c-f7e516fcade6" (UID: "82a613a4-e584-4eb2-9e9c-f7e516fcade6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.240920 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a613a4-e584-4eb2-9e9c-f7e516fcade6-kube-api-access-r5lfk" (OuterVolumeSpecName: "kube-api-access-r5lfk") pod "82a613a4-e584-4eb2-9e9c-f7e516fcade6" (UID: "82a613a4-e584-4eb2-9e9c-f7e516fcade6"). InnerVolumeSpecName "kube-api-access-r5lfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.331106 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5lfk\" (UniqueName: \"kubernetes.io/projected/82a613a4-e584-4eb2-9e9c-f7e516fcade6-kube-api-access-r5lfk\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.331153 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a613a4-e584-4eb2-9e9c-f7e516fcade6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.397006 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.486051 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5gtmt"] Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.486336 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerName="dnsmasq-dns" containerID="cri-o://f537f18a6824f8433961050c19bccd2521790acfe515efe9907e5f46d40c8721" gracePeriod=10 Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.544863 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx"] Feb 14 14:13:40 crc kubenswrapper[4750]: E0214 14:13:40.545329 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a613a4-e584-4eb2-9e9c-f7e516fcade6" containerName="mariadb-account-create-update" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.545345 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a613a4-e584-4eb2-9e9c-f7e516fcade6" containerName="mariadb-account-create-update" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.545527 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="82a613a4-e584-4eb2-9e9c-f7e516fcade6" containerName="mariadb-account-create-update" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.546232 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.559443 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx"] Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.645369 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/441c9e61-7af7-44d7-825f-cab61e06ad3a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-bb7vx\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.645428 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84lr\" (UniqueName: \"kubernetes.io/projected/441c9e61-7af7-44d7-825f-cab61e06ad3a-kube-api-access-x84lr\") pod \"mysqld-exporter-openstack-cell1-db-create-bb7vx\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.748275 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/441c9e61-7af7-44d7-825f-cab61e06ad3a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-bb7vx\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.748329 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x84lr\" (UniqueName: \"kubernetes.io/projected/441c9e61-7af7-44d7-825f-cab61e06ad3a-kube-api-access-x84lr\") pod \"mysqld-exporter-openstack-cell1-db-create-bb7vx\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.749272 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/441c9e61-7af7-44d7-825f-cab61e06ad3a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-bb7vx\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.760560 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae44155-c5f4-49f1-ab15-1365f1c77e0b" path="/var/lib/kubelet/pods/fae44155-c5f4-49f1-ab15-1365f1c77e0b/volumes" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.777123 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-2462-account-create-update-2ctnw"] Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.778431 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.783339 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.788712 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-2462-account-create-update-2ctnw"] Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.820909 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x84lr\" (UniqueName: \"kubernetes.io/projected/441c9e61-7af7-44d7-825f-cab61e06ad3a-kube-api-access-x84lr\") pod \"mysqld-exporter-openstack-cell1-db-create-bb7vx\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.846430 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zfgpw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.847225 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfgpw" event={"ID":"82a613a4-e584-4eb2-9e9c-f7e516fcade6","Type":"ContainerDied","Data":"450244147b60b896e41a3f0a14ee4025e77619a5fec87cafb329d34978294aed"} Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.847251 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="450244147b60b896e41a3f0a14ee4025e77619a5fec87cafb329d34978294aed" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.850140 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c1a2e7-75be-464d-96d5-21957aecf6ae-operator-scripts\") pod \"mysqld-exporter-2462-account-create-update-2ctnw\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.850183 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9pbr\" (UniqueName: \"kubernetes.io/projected/29c1a2e7-75be-464d-96d5-21957aecf6ae-kube-api-access-c9pbr\") pod \"mysqld-exporter-2462-account-create-update-2ctnw\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.859814 4750 generic.go:334] "Generic (PLEG): container finished" podID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerID="f537f18a6824f8433961050c19bccd2521790acfe515efe9907e5f46d40c8721" exitCode=0 Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.859859 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" 
event={"ID":"d44aab2e-5a28-47fc-8598-d3b6f0d0729c","Type":"ContainerDied","Data":"f537f18a6824f8433961050c19bccd2521790acfe515efe9907e5f46d40c8721"} Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.865484 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.951816 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c1a2e7-75be-464d-96d5-21957aecf6ae-operator-scripts\") pod \"mysqld-exporter-2462-account-create-update-2ctnw\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.952164 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9pbr\" (UniqueName: \"kubernetes.io/projected/29c1a2e7-75be-464d-96d5-21957aecf6ae-kube-api-access-c9pbr\") pod \"mysqld-exporter-2462-account-create-update-2ctnw\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.952530 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c1a2e7-75be-464d-96d5-21957aecf6ae-operator-scripts\") pod \"mysqld-exporter-2462-account-create-update-2ctnw\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:40 crc kubenswrapper[4750]: I0214 14:13:40.979633 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9pbr\" (UniqueName: \"kubernetes.io/projected/29c1a2e7-75be-464d-96d5-21957aecf6ae-kube-api-access-c9pbr\") pod \"mysqld-exporter-2462-account-create-update-2ctnw\" (UID: 
\"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:41 crc kubenswrapper[4750]: I0214 14:13:41.101666 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:41 crc kubenswrapper[4750]: I0214 14:13:41.151644 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zfgpw"] Feb 14 14:13:41 crc kubenswrapper[4750]: I0214 14:13:41.161298 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zfgpw"] Feb 14 14:13:41 crc kubenswrapper[4750]: I0214 14:13:41.197311 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.130230 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.285186 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dp5b\" (UniqueName: \"kubernetes.io/projected/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-kube-api-access-4dp5b\") pod \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.285238 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-config\") pod \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.285329 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-dns-svc\") pod 
\"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\" (UID: \"d44aab2e-5a28-47fc-8598-d3b6f0d0729c\") " Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.292052 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-kube-api-access-4dp5b" (OuterVolumeSpecName: "kube-api-access-4dp5b") pod "d44aab2e-5a28-47fc-8598-d3b6f0d0729c" (UID: "d44aab2e-5a28-47fc-8598-d3b6f0d0729c"). InnerVolumeSpecName "kube-api-access-4dp5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.339745 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d44aab2e-5a28-47fc-8598-d3b6f0d0729c" (UID: "d44aab2e-5a28-47fc-8598-d3b6f0d0729c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.390789 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-config" (OuterVolumeSpecName: "config") pod "d44aab2e-5a28-47fc-8598-d3b6f0d0729c" (UID: "d44aab2e-5a28-47fc-8598-d3b6f0d0729c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.401523 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.401556 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dp5b\" (UniqueName: \"kubernetes.io/projected/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-kube-api-access-4dp5b\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.401567 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44aab2e-5a28-47fc-8598-d3b6f0d0729c-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.456511 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx"] Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.488231 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r6mf6"] Feb 14 14:13:42 crc kubenswrapper[4750]: W0214 14:13:42.489322 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9f7486_cff2_4a61_8c5e_c71977aab921.slice/crio-146471f1a834cf4e746ece1784c93f07ff4278b2e148dcaba0386d01defdf766 WatchSource:0}: Error finding container 146471f1a834cf4e746ece1784c93f07ff4278b2e148dcaba0386d01defdf766: Status 404 returned error can't find the container with id 146471f1a834cf4e746ece1784c93f07ff4278b2e148dcaba0386d01defdf766 Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.626399 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-2462-account-create-update-2ctnw"] Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.755127 4750 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="82a613a4-e584-4eb2-9e9c-f7e516fcade6" path="/var/lib/kubelet/pods/82a613a4-e584-4eb2-9e9c-f7e516fcade6/volumes" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.920593 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6mf6" event={"ID":"4c9f7486-cff2-4a61-8c5e-c71977aab921","Type":"ContainerStarted","Data":"146471f1a834cf4e746ece1784c93f07ff4278b2e148dcaba0386d01defdf766"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.923236 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" event={"ID":"441c9e61-7af7-44d7-825f-cab61e06ad3a","Type":"ContainerStarted","Data":"023cbcf05fae49b7fe7145f0f1570636ff432499b07f916d6574efc7dc103eec"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.925132 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2fnm" event={"ID":"e6aba344-d824-42da-996e-733b7480a2eb","Type":"ContainerStarted","Data":"23b0cd9d5bdd18d0b2f464e28953d1d981f710ab2da396d8b8499f2ce01e8be7"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.926559 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" event={"ID":"29c1a2e7-75be-464d-96d5-21957aecf6ae","Type":"ContainerStarted","Data":"057fe1657da690a39f75a707d165e8e0f2765eb1234f6d00628d0ecd4bfda480"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.938801 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerStarted","Data":"8366780ea09f93d98716df83dc2d9f015f707c8377fd490621aa7086b2f76307"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.942792 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"cb0d3b08-53c1-4396-9413-bf581fad715a","Type":"ContainerStarted","Data":"4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.944236 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.952922 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" event={"ID":"d44aab2e-5a28-47fc-8598-d3b6f0d0729c","Type":"ContainerDied","Data":"0dcdc3c95841ef8367c973579ef130059903b1eb84b4273204ebd5c9177c2c0b"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.953245 4750 scope.go:117] "RemoveContainer" containerID="f537f18a6824f8433961050c19bccd2521790acfe515efe9907e5f46d40c8721" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.953307 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5gtmt" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.958139 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9ccd0d36-6fed-4aeb-b811-28cf48001750","Type":"ContainerStarted","Data":"5d9735d0b4e625dd0eb8a68b36061a57a57676754559f0a692ae6637d69260f0"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.958644 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.961469 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-p2fnm" podStartSLOduration=2.190732331 podStartE2EDuration="7.961441219s" podCreationTimestamp="2026-02-14 14:13:35 +0000 UTC" firstStartedPulling="2026-02-14 14:13:36.077465002 +0000 UTC m=+1288.103454473" lastFinishedPulling="2026-02-14 14:13:41.84817388 +0000 UTC m=+1293.874163361" observedRunningTime="2026-02-14 14:13:42.948984084 +0000 UTC 
m=+1294.974973565" watchObservedRunningTime="2026-02-14 14:13:42.961441219 +0000 UTC m=+1294.987430720" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.966283 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8775619e-e8b9-4bee-987c-2cba92f6fcf3","Type":"ContainerStarted","Data":"40a62329bd1c2416df33820f0045cacc23b55fea50787e25c159413b6e5b66d4"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.967315 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.969426 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f513b3ee-aa21-48f3-b5fa-395f0557292a","Type":"ContainerStarted","Data":"e3dfaeac3877491d9a6ba7973adcdbda21b465a624a27b5515e9052b76c9e5d2"} Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.970105 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.980091 4750 scope.go:117] "RemoveContainer" containerID="156d4ffaf0d2885882f5e166ebfafb2f4d94ff482b506f5984a8d609919485fd" Feb 14 14:13:42 crc kubenswrapper[4750]: I0214 14:13:42.999877 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.21515377 podStartE2EDuration="53.999856652s" podCreationTimestamp="2026-02-14 14:12:49 +0000 UTC" firstStartedPulling="2026-02-14 14:13:05.880968005 +0000 UTC m=+1257.906957486" lastFinishedPulling="2026-02-14 14:13:41.665670887 +0000 UTC m=+1293.691660368" observedRunningTime="2026-02-14 14:13:42.983933329 +0000 UTC m=+1295.009922820" watchObservedRunningTime="2026-02-14 14:13:42.999856652 +0000 UTC m=+1295.025846153" Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.025289 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-1" podStartSLOduration=49.855088997 podStartE2EDuration="1m0.025272605s" podCreationTimestamp="2026-02-14 14:12:43 +0000 UTC" firstStartedPulling="2026-02-14 14:12:54.293475547 +0000 UTC m=+1246.319465028" lastFinishedPulling="2026-02-14 14:13:04.463659155 +0000 UTC m=+1256.489648636" observedRunningTime="2026-02-14 14:13:43.019701657 +0000 UTC m=+1295.045691138" watchObservedRunningTime="2026-02-14 14:13:43.025272605 +0000 UTC m=+1295.051262086" Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.044093 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5gtmt"] Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.058517 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5gtmt"] Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.077192 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.91072472 podStartE2EDuration="1m0.077177192s" podCreationTimestamp="2026-02-14 14:12:43 +0000 UTC" firstStartedPulling="2026-02-14 14:12:54.299437736 +0000 UTC m=+1246.325427217" lastFinishedPulling="2026-02-14 14:13:04.465890218 +0000 UTC m=+1256.491879689" observedRunningTime="2026-02-14 14:13:43.076779751 +0000 UTC m=+1295.102769232" watchObservedRunningTime="2026-02-14 14:13:43.077177192 +0000 UTC m=+1295.103166673" Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.115404 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=41.222408541 podStartE2EDuration="1m0.115386959s" podCreationTimestamp="2026-02-14 14:12:43 +0000 UTC" firstStartedPulling="2026-02-14 14:12:45.488366264 +0000 UTC m=+1237.514355745" lastFinishedPulling="2026-02-14 14:13:04.381344682 +0000 UTC m=+1256.407334163" observedRunningTime="2026-02-14 14:13:43.106541338 +0000 UTC m=+1295.132530819" 
watchObservedRunningTime="2026-02-14 14:13:43.115386959 +0000 UTC m=+1295.141376440" Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.142954 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.36685119 podStartE2EDuration="1m0.142935653s" podCreationTimestamp="2026-02-14 14:12:43 +0000 UTC" firstStartedPulling="2026-02-14 14:12:45.687612823 +0000 UTC m=+1237.713602304" lastFinishedPulling="2026-02-14 14:13:04.463697286 +0000 UTC m=+1256.489686767" observedRunningTime="2026-02-14 14:13:43.137139698 +0000 UTC m=+1295.163129219" watchObservedRunningTime="2026-02-14 14:13:43.142935653 +0000 UTC m=+1295.168925134" Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.985502 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" event={"ID":"29c1a2e7-75be-464d-96d5-21957aecf6ae","Type":"ContainerStarted","Data":"8b5dabd35dbf29eb78cac59dec247c589f0f1faba7ab4172934b742fbdf0221b"} Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.996060 4750 generic.go:334] "Generic (PLEG): container finished" podID="441c9e61-7af7-44d7-825f-cab61e06ad3a" containerID="bb9e9fbda37e7933e06b98dd24aec68c6fae5eb809462e3e532352a24652c21e" exitCode=0 Feb 14 14:13:43 crc kubenswrapper[4750]: I0214 14:13:43.997417 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" event={"ID":"441c9e61-7af7-44d7-825f-cab61e06ad3a","Type":"ContainerDied","Data":"bb9e9fbda37e7933e06b98dd24aec68c6fae5eb809462e3e532352a24652c21e"} Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.016429 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" podStartSLOduration=4.016400417 podStartE2EDuration="4.016400417s" podCreationTimestamp="2026-02-14 14:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:13:44.007178145 +0000 UTC m=+1296.033167636" watchObservedRunningTime="2026-02-14 14:13:44.016400417 +0000 UTC m=+1296.042389908" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.756038 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" path="/var/lib/kubelet/pods/d44aab2e-5a28-47fc-8598-d3b6f0d0729c/volumes" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.881755 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8x2g7"] Feb 14 14:13:44 crc kubenswrapper[4750]: E0214 14:13:44.882368 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerName="init" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.882385 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerName="init" Feb 14 14:13:44 crc kubenswrapper[4750]: E0214 14:13:44.882401 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerName="dnsmasq-dns" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.882407 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerName="dnsmasq-dns" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.882571 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44aab2e-5a28-47fc-8598-d3b6f0d0729c" containerName="dnsmasq-dns" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.883274 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.884759 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 14:13:44 crc kubenswrapper[4750]: I0214 14:13:44.900089 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8x2g7"] Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.012285 4750 generic.go:334] "Generic (PLEG): container finished" podID="29c1a2e7-75be-464d-96d5-21957aecf6ae" containerID="8b5dabd35dbf29eb78cac59dec247c589f0f1faba7ab4172934b742fbdf0221b" exitCode=0 Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.012635 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" event={"ID":"29c1a2e7-75be-464d-96d5-21957aecf6ae","Type":"ContainerDied","Data":"8b5dabd35dbf29eb78cac59dec247c589f0f1faba7ab4172934b742fbdf0221b"} Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.066892 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94bp\" (UniqueName: \"kubernetes.io/projected/6a157ce0-bc33-4240-ba88-700769023ed0-kube-api-access-j94bp\") pod \"root-account-create-update-8x2g7\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.066990 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a157ce0-bc33-4240-ba88-700769023ed0-operator-scripts\") pod \"root-account-create-update-8x2g7\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.168609 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-j94bp\" (UniqueName: \"kubernetes.io/projected/6a157ce0-bc33-4240-ba88-700769023ed0-kube-api-access-j94bp\") pod \"root-account-create-update-8x2g7\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.168686 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a157ce0-bc33-4240-ba88-700769023ed0-operator-scripts\") pod \"root-account-create-update-8x2g7\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.169494 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a157ce0-bc33-4240-ba88-700769023ed0-operator-scripts\") pod \"root-account-create-update-8x2g7\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.195361 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94bp\" (UniqueName: \"kubernetes.io/projected/6a157ce0-bc33-4240-ba88-700769023ed0-kube-api-access-j94bp\") pod \"root-account-create-update-8x2g7\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.203221 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.620729 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.781374 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/441c9e61-7af7-44d7-825f-cab61e06ad3a-operator-scripts\") pod \"441c9e61-7af7-44d7-825f-cab61e06ad3a\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.781498 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x84lr\" (UniqueName: \"kubernetes.io/projected/441c9e61-7af7-44d7-825f-cab61e06ad3a-kube-api-access-x84lr\") pod \"441c9e61-7af7-44d7-825f-cab61e06ad3a\" (UID: \"441c9e61-7af7-44d7-825f-cab61e06ad3a\") " Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.782268 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441c9e61-7af7-44d7-825f-cab61e06ad3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "441c9e61-7af7-44d7-825f-cab61e06ad3a" (UID: "441c9e61-7af7-44d7-825f-cab61e06ad3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.788393 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441c9e61-7af7-44d7-825f-cab61e06ad3a-kube-api-access-x84lr" (OuterVolumeSpecName: "kube-api-access-x84lr") pod "441c9e61-7af7-44d7-825f-cab61e06ad3a" (UID: "441c9e61-7af7-44d7-825f-cab61e06ad3a"). InnerVolumeSpecName "kube-api-access-x84lr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.813359 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8x2g7"] Feb 14 14:13:45 crc kubenswrapper[4750]: W0214 14:13:45.827367 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a157ce0_bc33_4240_ba88_700769023ed0.slice/crio-8a82a1883be564cd4c4ca858a1bb5c820352b2556c19718b97cdbe01f71ce987 WatchSource:0}: Error finding container 8a82a1883be564cd4c4ca858a1bb5c820352b2556c19718b97cdbe01f71ce987: Status 404 returned error can't find the container with id 8a82a1883be564cd4c4ca858a1bb5c820352b2556c19718b97cdbe01f71ce987 Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.886411 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/441c9e61-7af7-44d7-825f-cab61e06ad3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:45 crc kubenswrapper[4750]: I0214 14:13:45.886442 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x84lr\" (UniqueName: \"kubernetes.io/projected/441c9e61-7af7-44d7-825f-cab61e06ad3a-kube-api-access-x84lr\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.026079 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" event={"ID":"441c9e61-7af7-44d7-825f-cab61e06ad3a","Type":"ContainerDied","Data":"023cbcf05fae49b7fe7145f0f1570636ff432499b07f916d6574efc7dc103eec"} Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.026316 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023cbcf05fae49b7fe7145f0f1570636ff432499b07f916d6574efc7dc103eec" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.026090 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.032575 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8x2g7" event={"ID":"6a157ce0-bc33-4240-ba88-700769023ed0","Type":"ContainerStarted","Data":"d6171091c2782e516f7486079df8af2fcbc9c26f34ce7290d45023050ac124ea"} Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.032612 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8x2g7" event={"ID":"6a157ce0-bc33-4240-ba88-700769023ed0","Type":"ContainerStarted","Data":"8a82a1883be564cd4c4ca858a1bb5c820352b2556c19718b97cdbe01f71ce987"} Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.056685 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8x2g7" podStartSLOduration=2.056659482 podStartE2EDuration="2.056659482s" podCreationTimestamp="2026-02-14 14:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:13:46.047405059 +0000 UTC m=+1298.073394540" watchObservedRunningTime="2026-02-14 14:13:46.056659482 +0000 UTC m=+1298.082648963" Feb 14 14:13:46 crc kubenswrapper[4750]: E0214 14:13:46.160303 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod441c9e61_7af7_44d7_825f_cab61e06ad3a.slice/crio-023cbcf05fae49b7fe7145f0f1570636ff432499b07f916d6574efc7dc103eec\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod441c9e61_7af7_44d7_825f_cab61e06ad3a.slice\": RecentStats: unable to find data in memory cache]" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.373617 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.507428 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.603253 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9pbr\" (UniqueName: \"kubernetes.io/projected/29c1a2e7-75be-464d-96d5-21957aecf6ae-kube-api-access-c9pbr\") pod \"29c1a2e7-75be-464d-96d5-21957aecf6ae\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.603570 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c1a2e7-75be-464d-96d5-21957aecf6ae-operator-scripts\") pod \"29c1a2e7-75be-464d-96d5-21957aecf6ae\" (UID: \"29c1a2e7-75be-464d-96d5-21957aecf6ae\") " Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.604128 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c1a2e7-75be-464d-96d5-21957aecf6ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29c1a2e7-75be-464d-96d5-21957aecf6ae" (UID: "29c1a2e7-75be-464d-96d5-21957aecf6ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.610046 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c1a2e7-75be-464d-96d5-21957aecf6ae-kube-api-access-c9pbr" (OuterVolumeSpecName: "kube-api-access-c9pbr") pod "29c1a2e7-75be-464d-96d5-21957aecf6ae" (UID: "29c1a2e7-75be-464d-96d5-21957aecf6ae"). InnerVolumeSpecName "kube-api-access-c9pbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.706253 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c1a2e7-75be-464d-96d5-21957aecf6ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.706296 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9pbr\" (UniqueName: \"kubernetes.io/projected/29c1a2e7-75be-464d-96d5-21957aecf6ae-kube-api-access-c9pbr\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:46 crc kubenswrapper[4750]: I0214 14:13:46.910250 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:13:46 crc kubenswrapper[4750]: E0214 14:13:46.910490 4750 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 14:13:46 crc kubenswrapper[4750]: E0214 14:13:46.910523 4750 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 14:13:46 crc kubenswrapper[4750]: E0214 14:13:46.910587 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift podName:e623022c-0cda-4463-b5e1-3157a1f8c1c1 nodeName:}" failed. No retries permitted until 2026-02-14 14:14:02.910566606 +0000 UTC m=+1314.936556087 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift") pod "swift-storage-0" (UID: "e623022c-0cda-4463-b5e1-3157a1f8c1c1") : configmap "swift-ring-files" not found Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.048442 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.048569 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2462-account-create-update-2ctnw" event={"ID":"29c1a2e7-75be-464d-96d5-21957aecf6ae","Type":"ContainerDied","Data":"057fe1657da690a39f75a707d165e8e0f2765eb1234f6d00628d0ecd4bfda480"} Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.048606 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057fe1657da690a39f75a707d165e8e0f2765eb1234f6d00628d0ecd4bfda480" Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.050256 4750 generic.go:334] "Generic (PLEG): container finished" podID="6a157ce0-bc33-4240-ba88-700769023ed0" containerID="d6171091c2782e516f7486079df8af2fcbc9c26f34ce7290d45023050ac124ea" exitCode=0 Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.050285 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8x2g7" event={"ID":"6a157ce0-bc33-4240-ba88-700769023ed0","Type":"ContainerDied","Data":"d6171091c2782e516f7486079df8af2fcbc9c26f34ce7290d45023050ac124ea"} Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.796779 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4dn6h" podUID="761260d8-59af-48eb-bb26-aa7523be2d9d" containerName="ovn-controller" probeResult="failure" output=< Feb 14 14:13:47 crc kubenswrapper[4750]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 14:13:47 crc 
kubenswrapper[4750]: > Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.880377 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:13:47 crc kubenswrapper[4750]: I0214 14:13:47.895756 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bpd75" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.117999 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4dn6h-config-dfxcq"] Feb 14 14:13:48 crc kubenswrapper[4750]: E0214 14:13:48.118457 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c1a2e7-75be-464d-96d5-21957aecf6ae" containerName="mariadb-account-create-update" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.118473 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c1a2e7-75be-464d-96d5-21957aecf6ae" containerName="mariadb-account-create-update" Feb 14 14:13:48 crc kubenswrapper[4750]: E0214 14:13:48.118509 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441c9e61-7af7-44d7-825f-cab61e06ad3a" containerName="mariadb-database-create" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.118515 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="441c9e61-7af7-44d7-825f-cab61e06ad3a" containerName="mariadb-database-create" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.118698 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="441c9e61-7af7-44d7-825f-cab61e06ad3a" containerName="mariadb-database-create" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.118717 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c1a2e7-75be-464d-96d5-21957aecf6ae" containerName="mariadb-account-create-update" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.119449 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.122277 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.131941 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h-config-dfxcq"] Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.240848 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run-ovn\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.240911 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-scripts\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.240938 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.240968 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-additional-scripts\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: 
\"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.241029 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbkjm\" (UniqueName: \"kubernetes.io/projected/7e14714d-89e0-4029-9af7-6762f1b284b8-kube-api-access-vbkjm\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.241066 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-log-ovn\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343056 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run-ovn\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343343 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-scripts\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343379 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run\") pod 
\"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343402 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-additional-scripts\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343452 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbkjm\" (UniqueName: \"kubernetes.io/projected/7e14714d-89e0-4029-9af7-6762f1b284b8-kube-api-access-vbkjm\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343479 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-log-ovn\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343503 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run-ovn\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343595 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-log-ovn\") pod 
\"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.343845 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.344408 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-additional-scripts\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.345353 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-scripts\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.366706 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbkjm\" (UniqueName: \"kubernetes.io/projected/7e14714d-89e0-4029-9af7-6762f1b284b8-kube-api-access-vbkjm\") pod \"ovn-controller-4dn6h-config-dfxcq\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.466394 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.633140 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.751834 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j94bp\" (UniqueName: \"kubernetes.io/projected/6a157ce0-bc33-4240-ba88-700769023ed0-kube-api-access-j94bp\") pod \"6a157ce0-bc33-4240-ba88-700769023ed0\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.751986 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a157ce0-bc33-4240-ba88-700769023ed0-operator-scripts\") pod \"6a157ce0-bc33-4240-ba88-700769023ed0\" (UID: \"6a157ce0-bc33-4240-ba88-700769023ed0\") " Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.753378 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a157ce0-bc33-4240-ba88-700769023ed0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a157ce0-bc33-4240-ba88-700769023ed0" (UID: "6a157ce0-bc33-4240-ba88-700769023ed0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.762338 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a157ce0-bc33-4240-ba88-700769023ed0-kube-api-access-j94bp" (OuterVolumeSpecName: "kube-api-access-j94bp") pod "6a157ce0-bc33-4240-ba88-700769023ed0" (UID: "6a157ce0-bc33-4240-ba88-700769023ed0"). InnerVolumeSpecName "kube-api-access-j94bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.854443 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j94bp\" (UniqueName: \"kubernetes.io/projected/6a157ce0-bc33-4240-ba88-700769023ed0-kube-api-access-j94bp\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.854472 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a157ce0-bc33-4240-ba88-700769023ed0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:48 crc kubenswrapper[4750]: I0214 14:13:48.985248 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h-config-dfxcq"] Feb 14 14:13:48 crc kubenswrapper[4750]: W0214 14:13:48.994055 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e14714d_89e0_4029_9af7_6762f1b284b8.slice/crio-35f370958abed278bb6a20b805f85c092270272fa09e630ae189c1fcdaccccc7 WatchSource:0}: Error finding container 35f370958abed278bb6a20b805f85c092270272fa09e630ae189c1fcdaccccc7: Status 404 returned error can't find the container with id 35f370958abed278bb6a20b805f85c092270272fa09e630ae189c1fcdaccccc7 Feb 14 14:13:49 crc kubenswrapper[4750]: I0214 14:13:49.067856 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-dfxcq" event={"ID":"7e14714d-89e0-4029-9af7-6762f1b284b8","Type":"ContainerStarted","Data":"35f370958abed278bb6a20b805f85c092270272fa09e630ae189c1fcdaccccc7"} Feb 14 14:13:49 crc kubenswrapper[4750]: I0214 14:13:49.070492 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8x2g7" event={"ID":"6a157ce0-bc33-4240-ba88-700769023ed0","Type":"ContainerDied","Data":"8a82a1883be564cd4c4ca858a1bb5c820352b2556c19718b97cdbe01f71ce987"} Feb 14 14:13:49 crc kubenswrapper[4750]: 
I0214 14:13:49.070522 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a82a1883be564cd4c4ca858a1bb5c820352b2556c19718b97cdbe01f71ce987" Feb 14 14:13:49 crc kubenswrapper[4750]: I0214 14:13:49.070580 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8x2g7" Feb 14 14:13:50 crc kubenswrapper[4750]: I0214 14:13:50.081011 4750 generic.go:334] "Generic (PLEG): container finished" podID="7e14714d-89e0-4029-9af7-6762f1b284b8" containerID="1c370397b4b256bfc94c924656276de11640d4f24305b3d390f5a20c6ee99fd7" exitCode=0 Feb 14 14:13:50 crc kubenswrapper[4750]: I0214 14:13:50.081178 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-dfxcq" event={"ID":"7e14714d-89e0-4029-9af7-6762f1b284b8","Type":"ContainerDied","Data":"1c370397b4b256bfc94c924656276de11640d4f24305b3d390f5a20c6ee99fd7"} Feb 14 14:13:50 crc kubenswrapper[4750]: I0214 14:13:50.084766 4750 generic.go:334] "Generic (PLEG): container finished" podID="e6aba344-d824-42da-996e-733b7480a2eb" containerID="23b0cd9d5bdd18d0b2f464e28953d1d981f710ab2da396d8b8499f2ce01e8be7" exitCode=0 Feb 14 14:13:50 crc kubenswrapper[4750]: I0214 14:13:50.084803 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2fnm" event={"ID":"e6aba344-d824-42da-996e-733b7480a2eb","Type":"ContainerDied","Data":"23b0cd9d5bdd18d0b2f464e28953d1d981f710ab2da396d8b8499f2ce01e8be7"} Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.042749 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:13:51 crc kubenswrapper[4750]: E0214 14:13:51.043500 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a157ce0-bc33-4240-ba88-700769023ed0" containerName="mariadb-account-create-update" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.043530 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6a157ce0-bc33-4240-ba88-700769023ed0" containerName="mariadb-account-create-update" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.043758 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a157ce0-bc33-4240-ba88-700769023ed0" containerName="mariadb-account-create-update" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.044604 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.046938 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.083008 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.187295 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8x2g7"] Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.207333 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.207772 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvbq\" (UniqueName: \"kubernetes.io/projected/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-kube-api-access-lvvbq\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.207820 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-config-data\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.211346 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8x2g7"] Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.313229 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvbq\" (UniqueName: \"kubernetes.io/projected/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-kube-api-access-lvvbq\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.313324 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-config-data\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.313473 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.319758 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.323678 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-config-data\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.344514 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvbq\" (UniqueName: \"kubernetes.io/projected/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-kube-api-access-lvvbq\") pod \"mysqld-exporter-0\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.367554 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.376627 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:51 crc kubenswrapper[4750]: I0214 14:13:51.392822 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:52 crc kubenswrapper[4750]: I0214 14:13:52.109142 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:52 crc kubenswrapper[4750]: I0214 14:13:52.754457 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a157ce0-bc33-4240-ba88-700769023ed0" path="/var/lib/kubelet/pods/6a157ce0-bc33-4240-ba88-700769023ed0/volumes" Feb 14 14:13:52 crc kubenswrapper[4750]: I0214 14:13:52.790513 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4dn6h" Feb 14 14:13:54 crc kubenswrapper[4750]: I0214 14:13:54.961868 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: 
connection refused" Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.172585 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.218268 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.218884 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="prometheus" containerID="cri-o://abde33cab497e7db6cf9f65c9368641100a293e5fd8c912a95a23097f661a56f" gracePeriod=600 Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.219468 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="thanos-sidecar" containerID="cri-o://8366780ea09f93d98716df83dc2d9f015f707c8377fd490621aa7086b2f76307" gracePeriod=600 Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.219800 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="config-reloader" containerID="cri-o://497eea6247b9916243db87178dca6c17e6136acdc43c2d05e1240c8250745d28" gracePeriod=600 Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.220224 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 14 14:13:55 crc kubenswrapper[4750]: I0214 14:13:55.232296 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.158093 4750 generic.go:334] "Generic (PLEG): container finished" podID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerID="8366780ea09f93d98716df83dc2d9f015f707c8377fd490621aa7086b2f76307" exitCode=0 Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.158152 4750 generic.go:334] "Generic (PLEG): container finished" podID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerID="497eea6247b9916243db87178dca6c17e6136acdc43c2d05e1240c8250745d28" exitCode=0 Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.158163 4750 generic.go:334] "Generic (PLEG): container finished" podID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerID="abde33cab497e7db6cf9f65c9368641100a293e5fd8c912a95a23097f661a56f" exitCode=0 Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.158162 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerDied","Data":"8366780ea09f93d98716df83dc2d9f015f707c8377fd490621aa7086b2f76307"} Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.158206 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerDied","Data":"497eea6247b9916243db87178dca6c17e6136acdc43c2d05e1240c8250745d28"} Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.158218 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerDied","Data":"abde33cab497e7db6cf9f65c9368641100a293e5fd8c912a95a23097f661a56f"} Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.210817 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xg2ph"] Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 
14:13:56.213574 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.222398 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.228886 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xg2ph"] Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.331616 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgwf\" (UniqueName: \"kubernetes.io/projected/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-kube-api-access-mzgwf\") pod \"root-account-create-update-xg2ph\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.332038 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-operator-scripts\") pod \"root-account-create-update-xg2ph\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.373638 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.137:9090/-/ready\": dial tcp 10.217.0.137:9090: connect: connection refused" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.434235 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgwf\" (UniqueName: \"kubernetes.io/projected/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-kube-api-access-mzgwf\") pod 
\"root-account-create-update-xg2ph\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.434504 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-operator-scripts\") pod \"root-account-create-update-xg2ph\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.435438 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-operator-scripts\") pod \"root-account-create-update-xg2ph\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.456021 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgwf\" (UniqueName: \"kubernetes.io/projected/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-kube-api-access-mzgwf\") pod \"root-account-create-update-xg2ph\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:56 crc kubenswrapper[4750]: I0214 14:13:56.533431 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xg2ph" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.867072 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.900102 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974668 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-combined-ca-bundle\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974752 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-scripts\") pod \"7e14714d-89e0-4029-9af7-6762f1b284b8\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974828 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbkjm\" (UniqueName: \"kubernetes.io/projected/7e14714d-89e0-4029-9af7-6762f1b284b8-kube-api-access-vbkjm\") pod \"7e14714d-89e0-4029-9af7-6762f1b284b8\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974868 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-dispersionconf\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974888 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-ring-data-devices\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974905 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-log-ovn\") pod \"7e14714d-89e0-4029-9af7-6762f1b284b8\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974931 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-swiftconf\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.974973 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cscgd\" (UniqueName: \"kubernetes.io/projected/e6aba344-d824-42da-996e-733b7480a2eb-kube-api-access-cscgd\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975028 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-additional-scripts\") pod \"7e14714d-89e0-4029-9af7-6762f1b284b8\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975038 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7e14714d-89e0-4029-9af7-6762f1b284b8" (UID: "7e14714d-89e0-4029-9af7-6762f1b284b8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975062 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run\") pod \"7e14714d-89e0-4029-9af7-6762f1b284b8\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975082 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run" (OuterVolumeSpecName: "var-run") pod "7e14714d-89e0-4029-9af7-6762f1b284b8" (UID: "7e14714d-89e0-4029-9af7-6762f1b284b8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975278 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6aba344-d824-42da-996e-733b7480a2eb-etc-swift\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975322 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-scripts\") pod \"e6aba344-d824-42da-996e-733b7480a2eb\" (UID: \"e6aba344-d824-42da-996e-733b7480a2eb\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975365 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run-ovn\") pod \"7e14714d-89e0-4029-9af7-6762f1b284b8\" (UID: \"7e14714d-89e0-4029-9af7-6762f1b284b8\") " Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975593 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.975726 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7e14714d-89e0-4029-9af7-6762f1b284b8" (UID: "7e14714d-89e0-4029-9af7-6762f1b284b8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.976316 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6aba344-d824-42da-996e-733b7480a2eb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.976746 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-scripts" (OuterVolumeSpecName: "scripts") pod "7e14714d-89e0-4029-9af7-6762f1b284b8" (UID: "7e14714d-89e0-4029-9af7-6762f1b284b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.976762 4750 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.977061 4750 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6aba344-d824-42da-996e-733b7480a2eb-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.977076 4750 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.977087 4750 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.977099 4750 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e14714d-89e0-4029-9af7-6762f1b284b8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.977460 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7e14714d-89e0-4029-9af7-6762f1b284b8" (UID: "7e14714d-89e0-4029-9af7-6762f1b284b8"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:57 crc kubenswrapper[4750]: I0214 14:13:57.986250 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e14714d-89e0-4029-9af7-6762f1b284b8-kube-api-access-vbkjm" (OuterVolumeSpecName: "kube-api-access-vbkjm") pod "7e14714d-89e0-4029-9af7-6762f1b284b8" (UID: "7e14714d-89e0-4029-9af7-6762f1b284b8"). InnerVolumeSpecName "kube-api-access-vbkjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.009319 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.009451 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6aba344-d824-42da-996e-733b7480a2eb-kube-api-access-cscgd" (OuterVolumeSpecName: "kube-api-access-cscgd") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "kube-api-access-cscgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.010067 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-scripts" (OuterVolumeSpecName: "scripts") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.020488 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.061044 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e6aba344-d824-42da-996e-733b7480a2eb" (UID: "e6aba344-d824-42da-996e-733b7480a2eb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079526 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079557 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079567 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbkjm\" (UniqueName: \"kubernetes.io/projected/7e14714d-89e0-4029-9af7-6762f1b284b8-kube-api-access-vbkjm\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079578 4750 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-dispersionconf\") on node \"crc\" DevicePath \"\"" 
Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079587 4750 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6aba344-d824-42da-996e-733b7480a2eb-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079596 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cscgd\" (UniqueName: \"kubernetes.io/projected/e6aba344-d824-42da-996e-733b7480a2eb-kube-api-access-cscgd\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079606 4750 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7e14714d-89e0-4029-9af7-6762f1b284b8-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.079618 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6aba344-d824-42da-996e-733b7480a2eb-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.107130 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180536 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-0\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180607 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-tls-assets\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180729 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzbx\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-kube-api-access-7gzbx\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180766 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/012f3d60-0015-4f72-b414-8eb4f633f0f3-config-out\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180790 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-2\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180959 4750 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-1\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180994 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-config\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.180994 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.181078 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.181141 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-thanos-prometheus-http-client-file\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.181177 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.181809 4750 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.184955 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012f3d60-0015-4f72-b414-8eb4f633f0f3-config-out" (OuterVolumeSpecName: "config-out") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.186352 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.186692 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.188459 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.195518 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.197621 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-config" (OuterVolumeSpecName: "config") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.199761 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2fnm" event={"ID":"e6aba344-d824-42da-996e-733b7480a2eb","Type":"ContainerDied","Data":"4c23fb56850f47fbbdedaac77587f612e9f074f523132ee0643d1a566e55589d"} Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.199822 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c23fb56850f47fbbdedaac77587f612e9f074f523132ee0643d1a566e55589d" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.199891 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p2fnm" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.208669 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-kube-api-access-7gzbx" (OuterVolumeSpecName: "kube-api-access-7gzbx") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "kube-api-access-7gzbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 14:13:58.210053 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config podName:012f3d60-0015-4f72-b414-8eb4f633f0f3 nodeName:}" failed. 
No retries permitted until 2026-02-14 14:13:58.710022397 +0000 UTC m=+1310.736011878 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "web-config" (UniqueName: "kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3") : error deleting /var/lib/kubelet/pods/012f3d60-0015-4f72-b414-8eb4f633f0f3/volume-subpaths: remove /var/lib/kubelet/pods/012f3d60-0015-4f72-b414-8eb4f633f0f3/volume-subpaths: no such file or directory Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.212426 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.212608 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"012f3d60-0015-4f72-b414-8eb4f633f0f3","Type":"ContainerDied","Data":"18ca32401ff5ded06bdf4d3591f6258a4b267bef449c5d106d2d4aaff52c3188"} Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.212691 4750 scope.go:117] "RemoveContainer" containerID="8366780ea09f93d98716df83dc2d9f015f707c8377fd490621aa7086b2f76307" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.226292 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-dfxcq" event={"ID":"7e14714d-89e0-4029-9af7-6762f1b284b8","Type":"ContainerDied","Data":"35f370958abed278bb6a20b805f85c092270272fa09e630ae189c1fcdaccccc7"} Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.226325 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-dfxcq" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.226333 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f370958abed278bb6a20b805f85c092270272fa09e630ae189c1fcdaccccc7" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.232995 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "pvc-c52b4e4a-a039-4845-bf9f-2855ce742360". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.271609 4750 scope.go:117] "RemoveContainer" containerID="497eea6247b9916243db87178dca6c17e6136acdc43c2d05e1240c8250745d28" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284062 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzbx\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-kube-api-access-7gzbx\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284128 4750 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/012f3d60-0015-4f72-b414-8eb4f633f0f3-config-out\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284144 4750 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284161 4750 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/012f3d60-0015-4f72-b414-8eb4f633f0f3-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284176 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284207 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") on node \"crc\" " Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284228 4750 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.284241 4750 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/012f3d60-0015-4f72-b414-8eb4f633f0f3-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.321493 4750 scope.go:117] "RemoveContainer" containerID="abde33cab497e7db6cf9f65c9368641100a293e5fd8c912a95a23097f661a56f" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.327150 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.327339 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c52b4e4a-a039-4845-bf9f-2855ce742360" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360") on node "crc" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.335508 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xg2ph"] Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.351557 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.368715 4750 scope.go:117] "RemoveContainer" containerID="856c7e3b80fdff9660116d588a1dfeda0e7deb6bcbc8d3a246a83853fb939ae9" Feb 14 14:13:58 crc kubenswrapper[4750]: W0214 14:13:58.375405 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefe4bd73_1ac8_4b3e_8ad0_fdd9439c4d4f.slice/crio-06538ee2dc5b6dae685e7fee74aa0d771443e642b8645f6765c991abfb713b78 WatchSource:0}: Error finding container 06538ee2dc5b6dae685e7fee74aa0d771443e642b8645f6765c991abfb713b78: Status 404 returned error can't find the container with id 06538ee2dc5b6dae685e7fee74aa0d771443e642b8645f6765c991abfb713b78 Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.385894 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.792563 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config\") pod \"012f3d60-0015-4f72-b414-8eb4f633f0f3\" (UID: \"012f3d60-0015-4f72-b414-8eb4f633f0f3\") " Feb 14 
14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.797704 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config" (OuterVolumeSpecName: "web-config") pod "012f3d60-0015-4f72-b414-8eb4f633f0f3" (UID: "012f3d60-0015-4f72-b414-8eb4f633f0f3"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.885518 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.895096 4750 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/012f3d60-0015-4f72-b414-8eb4f633f0f3-web-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.904729 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.916443 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 14:13:58.916928 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e14714d-89e0-4029-9af7-6762f1b284b8" containerName="ovn-config" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.916946 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e14714d-89e0-4029-9af7-6762f1b284b8" containerName="ovn-config" Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 14:13:58.916960 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="config-reloader" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.916967 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="config-reloader" Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 
14:13:58.916987 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="init-config-reloader" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.916994 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="init-config-reloader" Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 14:13:58.917015 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="thanos-sidecar" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917021 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="thanos-sidecar" Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 14:13:58.917036 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="prometheus" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917042 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="prometheus" Feb 14 14:13:58 crc kubenswrapper[4750]: E0214 14:13:58.917057 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aba344-d824-42da-996e-733b7480a2eb" containerName="swift-ring-rebalance" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917064 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aba344-d824-42da-996e-733b7480a2eb" containerName="swift-ring-rebalance" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917287 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="config-reloader" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917309 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e14714d-89e0-4029-9af7-6762f1b284b8" containerName="ovn-config" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 
14:13:58.917327 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="thanos-sidecar" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917338 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" containerName="prometheus" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.917353 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6aba344-d824-42da-996e-733b7480a2eb" containerName="swift-ring-rebalance" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.919288 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.922753 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.922773 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.923432 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.923762 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.924205 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.924364 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-f7td5" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.924603 4750 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.927504 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.935874 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.943568 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997318 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997370 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7cdc1f12-6f04-4860-9536-32178d28e2b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997389 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997484 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997505 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997523 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997542 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997561 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbhx\" (UniqueName: \"kubernetes.io/projected/7cdc1f12-6f04-4860-9536-32178d28e2b7-kube-api-access-dpbhx\") pod \"prometheus-metric-storage-0\" (UID: 
\"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997588 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997759 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7cdc1f12-6f04-4860-9536-32178d28e2b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.997896 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.998013 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:58 crc kubenswrapper[4750]: I0214 14:13:58.998169 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.011828 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4dn6h-config-dfxcq"] Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.024408 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4dn6h-config-dfxcq"] Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.047259 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4dn6h-config-4b6rh"] Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.048497 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.051519 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.091536 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h-config-4b6rh"] Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.100651 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.100712 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-additional-scripts\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.100765 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-scripts\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.100788 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7cdc1f12-6f04-4860-9536-32178d28e2b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.100848 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.100876 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc 
kubenswrapper[4750]: I0214 14:13:59.100921 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101057 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101103 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101139 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-log-ovn\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101161 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhlg\" (UniqueName: \"kubernetes.io/projected/150b6abc-5c50-4ece-b665-7003ecf481f7-kube-api-access-gzhlg\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: 
\"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101188 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7cdc1f12-6f04-4860-9536-32178d28e2b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101211 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101270 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run-ovn\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101327 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101352 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101376 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101399 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.101420 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbhx\" (UniqueName: \"kubernetes.io/projected/7cdc1f12-6f04-4860-9536-32178d28e2b7-kube-api-access-dpbhx\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.102471 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.110202 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.110590 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.110679 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7cdc1f12-6f04-4860-9536-32178d28e2b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.110797 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.110895 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.112158 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.113149 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.117914 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbhx\" (UniqueName: \"kubernetes.io/projected/7cdc1f12-6f04-4860-9536-32178d28e2b7-kube-api-access-dpbhx\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.114367 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7cdc1f12-6f04-4860-9536-32178d28e2b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.184434 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7cdc1f12-6f04-4860-9536-32178d28e2b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 
14:13:59.184555 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7cdc1f12-6f04-4860-9536-32178d28e2b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.203383 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.203450 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-log-ovn\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.203482 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzhlg\" (UniqueName: \"kubernetes.io/projected/150b6abc-5c50-4ece-b665-7003ecf481f7-kube-api-access-gzhlg\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.203548 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run-ovn\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.203615 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-additional-scripts\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.203650 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-scripts\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.204597 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.204746 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-log-ovn\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.204856 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run-ovn\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.205331 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-additional-scripts\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.205855 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-scripts\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.216213 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.216258 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12ac80474dd276c57c1627a564c6c6310977fcdefa749db84ccb04d187d2f877/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.265466 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhlg\" (UniqueName: \"kubernetes.io/projected/150b6abc-5c50-4ece-b665-7003ecf481f7-kube-api-access-gzhlg\") pod \"ovn-controller-4dn6h-config-4b6rh\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.272867 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="bcc5808f-77c2-4e08-9feb-f8f27081f7c8" containerID="6b08cbfc6b90535c3b07afebb7bf8ed5cc09772f998c7b8ec5eace16eb76c1b0" exitCode=0 Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.272932 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg2ph" event={"ID":"bcc5808f-77c2-4e08-9feb-f8f27081f7c8","Type":"ContainerDied","Data":"6b08cbfc6b90535c3b07afebb7bf8ed5cc09772f998c7b8ec5eace16eb76c1b0"} Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.272960 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg2ph" event={"ID":"bcc5808f-77c2-4e08-9feb-f8f27081f7c8","Type":"ContainerStarted","Data":"4b56d0ceada9215b3a04196e462ad1e99d6073271dff665fb3af2236620ea29d"} Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.296671 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6mf6" event={"ID":"4c9f7486-cff2-4a61-8c5e-c71977aab921","Type":"ContainerStarted","Data":"45d2426f9da735749b52cf51ca8bed88984aa43843e07437f894fb765bca151c"} Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.305106 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f","Type":"ContainerStarted","Data":"06538ee2dc5b6dae685e7fee74aa0d771443e642b8645f6765c991abfb713b78"} Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.329471 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r6mf6" podStartSLOduration=5.767756404 podStartE2EDuration="21.329450824s" podCreationTimestamp="2026-02-14 14:13:38 +0000 UTC" firstStartedPulling="2026-02-14 14:13:42.498181886 +0000 UTC m=+1294.524171367" lastFinishedPulling="2026-02-14 14:13:58.059876316 +0000 UTC m=+1310.085865787" observedRunningTime="2026-02-14 14:13:59.325420464 +0000 UTC m=+1311.351409935" watchObservedRunningTime="2026-02-14 14:13:59.329450824 +0000 UTC m=+1311.355440295" 
Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.366527 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.368787 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c52b4e4a-a039-4845-bf9f-2855ce742360\") pod \"prometheus-metric-storage-0\" (UID: \"7cdc1f12-6f04-4860-9536-32178d28e2b7\") " pod="openstack/prometheus-metric-storage-0" Feb 14 14:13:59 crc kubenswrapper[4750]: I0214 14:13:59.542564 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 14:14:00 crc kubenswrapper[4750]: W0214 14:14:00.445421 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod150b6abc_5c50_4ece_b665_7003ecf481f7.slice/crio-6e83f4d088c50bbdd847cba0d10692845cdf71eea66ef07eaa437915c29c7bd6 WatchSource:0}: Error finding container 6e83f4d088c50bbdd847cba0d10692845cdf71eea66ef07eaa437915c29c7bd6: Status 404 returned error can't find the container with id 6e83f4d088c50bbdd847cba0d10692845cdf71eea66ef07eaa437915c29c7bd6 Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.449900 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h-config-4b6rh"] Feb 14 14:14:00 crc kubenswrapper[4750]: W0214 14:14:00.559379 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cdc1f12_6f04_4860_9536_32178d28e2b7.slice/crio-be4af3b2cf2897732ab6d9561b0471f08c574a3decc53c267fa56db563f69485 WatchSource:0}: Error finding container be4af3b2cf2897732ab6d9561b0471f08c574a3decc53c267fa56db563f69485: Status 404 returned error can't find the container with id 
be4af3b2cf2897732ab6d9561b0471f08c574a3decc53c267fa56db563f69485 Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.561263 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.618696 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xg2ph" Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.742331 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-operator-scripts\") pod \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.742437 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzgwf\" (UniqueName: \"kubernetes.io/projected/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-kube-api-access-mzgwf\") pod \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\" (UID: \"bcc5808f-77c2-4e08-9feb-f8f27081f7c8\") " Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.742735 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcc5808f-77c2-4e08-9feb-f8f27081f7c8" (UID: "bcc5808f-77c2-4e08-9feb-f8f27081f7c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.743079 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.748288 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-kube-api-access-mzgwf" (OuterVolumeSpecName: "kube-api-access-mzgwf") pod "bcc5808f-77c2-4e08-9feb-f8f27081f7c8" (UID: "bcc5808f-77c2-4e08-9feb-f8f27081f7c8"). InnerVolumeSpecName "kube-api-access-mzgwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.756099 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012f3d60-0015-4f72-b414-8eb4f633f0f3" path="/var/lib/kubelet/pods/012f3d60-0015-4f72-b414-8eb4f633f0f3/volumes" Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.757822 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e14714d-89e0-4029-9af7-6762f1b284b8" path="/var/lib/kubelet/pods/7e14714d-89e0-4029-9af7-6762f1b284b8/volumes" Feb 14 14:14:00 crc kubenswrapper[4750]: I0214 14:14:00.844944 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzgwf\" (UniqueName: \"kubernetes.io/projected/bcc5808f-77c2-4e08-9feb-f8f27081f7c8-kube-api-access-mzgwf\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.322134 4750 generic.go:334] "Generic (PLEG): container finished" podID="150b6abc-5c50-4ece-b665-7003ecf481f7" containerID="3db342e71128ca6e06e6bf30876da25e35205ef9b1172ae15b6c1179f97fe921" exitCode=0 Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.322232 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-4b6rh" 
event={"ID":"150b6abc-5c50-4ece-b665-7003ecf481f7","Type":"ContainerDied","Data":"3db342e71128ca6e06e6bf30876da25e35205ef9b1172ae15b6c1179f97fe921"} Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.322434 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-4b6rh" event={"ID":"150b6abc-5c50-4ece-b665-7003ecf481f7","Type":"ContainerStarted","Data":"6e83f4d088c50bbdd847cba0d10692845cdf71eea66ef07eaa437915c29c7bd6"} Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.323788 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7cdc1f12-6f04-4860-9536-32178d28e2b7","Type":"ContainerStarted","Data":"be4af3b2cf2897732ab6d9561b0471f08c574a3decc53c267fa56db563f69485"} Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.325334 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f","Type":"ContainerStarted","Data":"e00105401d705ec8d9977d0c7692f62c2dd77119e897eb1e21ce7300703f8605"} Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.327008 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg2ph" event={"ID":"bcc5808f-77c2-4e08-9feb-f8f27081f7c8","Type":"ContainerDied","Data":"4b56d0ceada9215b3a04196e462ad1e99d6073271dff665fb3af2236620ea29d"} Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.327045 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b56d0ceada9215b3a04196e462ad1e99d6073271dff665fb3af2236620ea29d" Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.327051 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xg2ph" Feb 14 14:14:01 crc kubenswrapper[4750]: I0214 14:14:01.371569 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=8.649154607 podStartE2EDuration="10.371546794s" podCreationTimestamp="2026-02-14 14:13:51 +0000 UTC" firstStartedPulling="2026-02-14 14:13:58.38843019 +0000 UTC m=+1310.414419671" lastFinishedPulling="2026-02-14 14:14:00.110822377 +0000 UTC m=+1312.136811858" observedRunningTime="2026-02-14 14:14:01.361061337 +0000 UTC m=+1313.387050818" watchObservedRunningTime="2026-02-14 14:14:01.371546794 +0000 UTC m=+1313.397536265" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.006024 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.015047 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e623022c-0cda-4463-b5e1-3157a1f8c1c1-etc-swift\") pod \"swift-storage-0\" (UID: \"e623022c-0cda-4463-b5e1-3157a1f8c1c1\") " pod="openstack/swift-storage-0" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.065225 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209220 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run-ovn\") pod \"150b6abc-5c50-4ece-b665-7003ecf481f7\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209436 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "150b6abc-5c50-4ece-b665-7003ecf481f7" (UID: "150b6abc-5c50-4ece-b665-7003ecf481f7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209633 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-additional-scripts\") pod \"150b6abc-5c50-4ece-b665-7003ecf481f7\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209734 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-scripts\") pod \"150b6abc-5c50-4ece-b665-7003ecf481f7\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209786 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzhlg\" (UniqueName: \"kubernetes.io/projected/150b6abc-5c50-4ece-b665-7003ecf481f7-kube-api-access-gzhlg\") pod \"150b6abc-5c50-4ece-b665-7003ecf481f7\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209875 4750 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-log-ovn\") pod \"150b6abc-5c50-4ece-b665-7003ecf481f7\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.209937 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run\") pod \"150b6abc-5c50-4ece-b665-7003ecf481f7\" (UID: \"150b6abc-5c50-4ece-b665-7003ecf481f7\") " Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210011 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "150b6abc-5c50-4ece-b665-7003ecf481f7" (UID: "150b6abc-5c50-4ece-b665-7003ecf481f7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210124 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run" (OuterVolumeSpecName: "var-run") pod "150b6abc-5c50-4ece-b665-7003ecf481f7" (UID: "150b6abc-5c50-4ece-b665-7003ecf481f7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210387 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "150b6abc-5c50-4ece-b665-7003ecf481f7" (UID: "150b6abc-5c50-4ece-b665-7003ecf481f7"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210563 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-scripts" (OuterVolumeSpecName: "scripts") pod "150b6abc-5c50-4ece-b665-7003ecf481f7" (UID: "150b6abc-5c50-4ece-b665-7003ecf481f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210937 4750 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210957 4750 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210968 4750 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210980 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150b6abc-5c50-4ece-b665-7003ecf481f7-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.210989 4750 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150b6abc-5c50-4ece-b665-7003ecf481f7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.213282 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/150b6abc-5c50-4ece-b665-7003ecf481f7-kube-api-access-gzhlg" (OuterVolumeSpecName: "kube-api-access-gzhlg") pod "150b6abc-5c50-4ece-b665-7003ecf481f7" (UID: "150b6abc-5c50-4ece-b665-7003ecf481f7"). InnerVolumeSpecName "kube-api-access-gzhlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.271040 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.314722 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzhlg\" (UniqueName: \"kubernetes.io/projected/150b6abc-5c50-4ece-b665-7003ecf481f7-kube-api-access-gzhlg\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.375322 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-4b6rh" event={"ID":"150b6abc-5c50-4ece-b665-7003ecf481f7","Type":"ContainerDied","Data":"6e83f4d088c50bbdd847cba0d10692845cdf71eea66ef07eaa437915c29c7bd6"} Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.375369 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e83f4d088c50bbdd847cba0d10692845cdf71eea66ef07eaa437915c29c7bd6" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.375379 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-4b6rh" Feb 14 14:14:03 crc kubenswrapper[4750]: I0214 14:14:03.893754 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.146946 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4dn6h-config-4b6rh"] Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.161667 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4dn6h-config-4b6rh"] Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.192315 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4dn6h-config-l6chq"] Feb 14 14:14:04 crc kubenswrapper[4750]: E0214 14:14:04.192698 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150b6abc-5c50-4ece-b665-7003ecf481f7" containerName="ovn-config" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.192715 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="150b6abc-5c50-4ece-b665-7003ecf481f7" containerName="ovn-config" Feb 14 14:14:04 crc kubenswrapper[4750]: E0214 14:14:04.192752 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc5808f-77c2-4e08-9feb-f8f27081f7c8" containerName="mariadb-account-create-update" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.192758 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc5808f-77c2-4e08-9feb-f8f27081f7c8" containerName="mariadb-account-create-update" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.192956 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="150b6abc-5c50-4ece-b665-7003ecf481f7" containerName="ovn-config" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.192970 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc5808f-77c2-4e08-9feb-f8f27081f7c8" containerName="mariadb-account-create-update" Feb 14 14:14:04 crc kubenswrapper[4750]: 
I0214 14:14:04.193665 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.197884 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.216726 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h-config-l6chq"] Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.235436 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.235495 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-additional-scripts\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.235634 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-scripts\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.235688 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrwh\" (UniqueName: 
\"kubernetes.io/projected/0754481d-22ac-4078-8ce9-a7d86475725b-kube-api-access-gnrwh\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.235910 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run-ovn\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.235943 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-log-ovn\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.336880 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run-ovn\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337231 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-log-ovn\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337279 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337272 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run-ovn\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337319 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-additional-scripts\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337365 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-scripts\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337379 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337388 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnrwh\" (UniqueName: 
\"kubernetes.io/projected/0754481d-22ac-4078-8ce9-a7d86475725b-kube-api-access-gnrwh\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.337447 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-log-ovn\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.338597 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-additional-scripts\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.339643 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-scripts\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.361239 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnrwh\" (UniqueName: \"kubernetes.io/projected/0754481d-22ac-4078-8ce9-a7d86475725b-kube-api-access-gnrwh\") pod \"ovn-controller-4dn6h-config-l6chq\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.387389 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"7cdc1f12-6f04-4860-9536-32178d28e2b7","Type":"ContainerStarted","Data":"ce6f22a86bd2dd1b8c47df560dc8b6d77569792ab6053d515c871eb0343de5ee"} Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.389263 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"e1718a68259405e88fa6fc0fa32ce91dc924e2bd5c8d7160a3647c156ecb04c2"} Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.523738 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.754541 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150b6abc-5c50-4ece-b665-7003ecf481f7" path="/var/lib/kubelet/pods/150b6abc-5c50-4ece-b665-7003ecf481f7/volumes" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.963252 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 14:14:04 crc kubenswrapper[4750]: I0214 14:14:04.992302 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4dn6h-config-l6chq"] Feb 14 14:14:05 crc kubenswrapper[4750]: I0214 14:14:05.176724 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 14:14:05 crc kubenswrapper[4750]: I0214 14:14:05.216517 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 14:14:05 crc kubenswrapper[4750]: I0214 14:14:05.401438 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-l6chq" event={"ID":"0754481d-22ac-4078-8ce9-a7d86475725b","Type":"ContainerStarted","Data":"b425e4bba60a44ff250544e0095c5da63f1e2cc4885d5306dfef2170a339eb27"} Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.411237 4750 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"0fb1852b3a81032714f9c6752866ce8aa9d8b755fe6df13c1dbaff1c99e5af96"} Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.411723 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"40c92812096fe387514c241b66c78fb189499c2d89a0fc36f8e58759c08e7e53"} Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.411734 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"217f385c5152d5a0edecda00347552963569bc0437b815b1adb3073626e65eaf"} Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.411742 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"83ab2a345b7e6305397d4bff898b8d8ac5a905ff9793e3101351f448c69ced9b"} Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.413607 4750 generic.go:334] "Generic (PLEG): container finished" podID="0754481d-22ac-4078-8ce9-a7d86475725b" containerID="297905ad9d5ede65642c5b7344a7c596166e22f92ff212e8437b8aebf590aa07" exitCode=0 Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.413632 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-l6chq" event={"ID":"0754481d-22ac-4078-8ce9-a7d86475725b","Type":"ContainerDied","Data":"297905ad9d5ede65642c5b7344a7c596166e22f92ff212e8437b8aebf590aa07"} Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.918461 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-j4kbh"] Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.919997 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:06 crc kubenswrapper[4750]: I0214 14:14:06.927922 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-j4kbh"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.051162 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-5cf5-account-create-update-s64xm"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.056126 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.058019 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.077454 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5cf5-account-create-update-s64xm"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.104767 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-operator-scripts\") pod \"heat-db-create-j4kbh\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.104885 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszml\" (UniqueName: \"kubernetes.io/projected/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-kube-api-access-qszml\") pod \"heat-db-create-j4kbh\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.206980 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjzn\" (UniqueName: 
\"kubernetes.io/projected/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-kube-api-access-8jjzn\") pod \"heat-5cf5-account-create-update-s64xm\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.207038 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-operator-scripts\") pod \"heat-db-create-j4kbh\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.207132 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qszml\" (UniqueName: \"kubernetes.io/projected/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-kube-api-access-qszml\") pod \"heat-db-create-j4kbh\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.207214 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-operator-scripts\") pod \"heat-5cf5-account-create-update-s64xm\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.207820 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-operator-scripts\") pod \"heat-db-create-j4kbh\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.218732 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4dmfv"] Feb 14 14:14:07 crc kubenswrapper[4750]: 
I0214 14:14:07.219965 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.231824 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qszml\" (UniqueName: \"kubernetes.io/projected/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-kube-api-access-qszml\") pod \"heat-db-create-j4kbh\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.237045 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4dmfv"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.242632 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.270387 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6534-account-create-update-bsnpb"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.271803 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.277769 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.305216 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6534-account-create-update-bsnpb"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.309837 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-operator-scripts\") pod \"heat-5cf5-account-create-update-s64xm\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.309917 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df37d4-bdf9-49e2-8286-085cd7975f98-operator-scripts\") pod \"neutron-db-create-4dmfv\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.310003 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjzn\" (UniqueName: \"kubernetes.io/projected/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-kube-api-access-8jjzn\") pod \"heat-5cf5-account-create-update-s64xm\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.310035 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfgnw\" (UniqueName: \"kubernetes.io/projected/95df37d4-bdf9-49e2-8286-085cd7975f98-kube-api-access-sfgnw\") pod \"neutron-db-create-4dmfv\" (UID: 
\"95df37d4-bdf9-49e2-8286-085cd7975f98\") " pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.310833 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-operator-scripts\") pod \"heat-5cf5-account-create-update-s64xm\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.336871 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjzn\" (UniqueName: \"kubernetes.io/projected/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-kube-api-access-8jjzn\") pod \"heat-5cf5-account-create-update-s64xm\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.340918 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zz2gv"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.345091 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.366925 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zz2gv"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.378426 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6232-account-create-update-7gdfp"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.379771 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.384038 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.398501 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.410968 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6232-account-create-update-7gdfp"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.411945 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df37d4-bdf9-49e2-8286-085cd7975f98-operator-scripts\") pod \"neutron-db-create-4dmfv\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.412041 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27ae574-7796-42fe-854a-8a82e9936e2d-operator-scripts\") pod \"cinder-6534-account-create-update-bsnpb\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.412085 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfgnw\" (UniqueName: \"kubernetes.io/projected/95df37d4-bdf9-49e2-8286-085cd7975f98-kube-api-access-sfgnw\") pod \"neutron-db-create-4dmfv\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.412107 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mrvlg\" (UniqueName: \"kubernetes.io/projected/c27ae574-7796-42fe-854a-8a82e9936e2d-kube-api-access-mrvlg\") pod \"cinder-6534-account-create-update-bsnpb\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.412892 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df37d4-bdf9-49e2-8286-085cd7975f98-operator-scripts\") pod \"neutron-db-create-4dmfv\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.433487 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfgnw\" (UniqueName: \"kubernetes.io/projected/95df37d4-bdf9-49e2-8286-085cd7975f98-kube-api-access-sfgnw\") pod \"neutron-db-create-4dmfv\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.446750 4750 generic.go:334] "Generic (PLEG): container finished" podID="4c9f7486-cff2-4a61-8c5e-c71977aab921" containerID="45d2426f9da735749b52cf51ca8bed88984aa43843e07437f894fb765bca151c" exitCode=0 Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.446837 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6mf6" event={"ID":"4c9f7486-cff2-4a61-8c5e-c71977aab921","Type":"ContainerDied","Data":"45d2426f9da735749b52cf51ca8bed88984aa43843e07437f894fb765bca151c"} Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.497085 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-b75n6"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.498575 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.500996 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.501578 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtfpt" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.501707 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.501820 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.515012 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mk7q\" (UniqueName: \"kubernetes.io/projected/188954af-a7be-4dc4-9c84-eb83c5c39d7e-kube-api-access-9mk7q\") pod \"cinder-db-create-zz2gv\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.515072 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr2rt\" (UniqueName: \"kubernetes.io/projected/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-kube-api-access-vr2rt\") pod \"neutron-6232-account-create-update-7gdfp\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.515138 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-operator-scripts\") pod \"neutron-6232-account-create-update-7gdfp\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " 
pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.515205 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188954af-a7be-4dc4-9c84-eb83c5c39d7e-operator-scripts\") pod \"cinder-db-create-zz2gv\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.515242 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27ae574-7796-42fe-854a-8a82e9936e2d-operator-scripts\") pod \"cinder-6534-account-create-update-bsnpb\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.515299 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrvlg\" (UniqueName: \"kubernetes.io/projected/c27ae574-7796-42fe-854a-8a82e9936e2d-kube-api-access-mrvlg\") pod \"cinder-6534-account-create-update-bsnpb\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.516314 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27ae574-7796-42fe-854a-8a82e9936e2d-operator-scripts\") pod \"cinder-6534-account-create-update-bsnpb\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.523799 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b75n6"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.537971 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mrvlg\" (UniqueName: \"kubernetes.io/projected/c27ae574-7796-42fe-854a-8a82e9936e2d-kube-api-access-mrvlg\") pod \"cinder-6534-account-create-update-bsnpb\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616472 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-config-data\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616529 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mk7q\" (UniqueName: \"kubernetes.io/projected/188954af-a7be-4dc4-9c84-eb83c5c39d7e-kube-api-access-9mk7q\") pod \"cinder-db-create-zz2gv\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616567 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rt\" (UniqueName: \"kubernetes.io/projected/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-kube-api-access-vr2rt\") pod \"neutron-6232-account-create-update-7gdfp\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616617 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-operator-scripts\") pod \"neutron-6232-account-create-update-7gdfp\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616666 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vv5g\" (UniqueName: \"kubernetes.io/projected/5f9044d3-72ef-43b1-b353-a02d6a3538b7-kube-api-access-4vv5g\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616687 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188954af-a7be-4dc4-9c84-eb83c5c39d7e-operator-scripts\") pod \"cinder-db-create-zz2gv\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.616757 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-combined-ca-bundle\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.617450 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188954af-a7be-4dc4-9c84-eb83c5c39d7e-operator-scripts\") pod \"cinder-db-create-zz2gv\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.617495 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-operator-scripts\") pod \"neutron-6232-account-create-update-7gdfp\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.618218 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-db-create-hrsll"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.620863 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.635386 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hrsll"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.638813 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mk7q\" (UniqueName: \"kubernetes.io/projected/188954af-a7be-4dc4-9c84-eb83c5c39d7e-kube-api-access-9mk7q\") pod \"cinder-db-create-zz2gv\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.639353 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr2rt\" (UniqueName: \"kubernetes.io/projected/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-kube-api-access-vr2rt\") pod \"neutron-6232-account-create-update-7gdfp\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.712289 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-dfac-account-create-update-lxm7n"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.713642 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.715677 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.716029 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.718361 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vv5g\" (UniqueName: \"kubernetes.io/projected/5f9044d3-72ef-43b1-b353-a02d6a3538b7-kube-api-access-4vv5g\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.718449 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkqf\" (UniqueName: \"kubernetes.io/projected/409d5a14-70a1-4109-894d-84c2b04e297d-kube-api-access-dmkqf\") pod \"barbican-db-create-hrsll\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.718500 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409d5a14-70a1-4109-894d-84c2b04e297d-operator-scripts\") pod \"barbican-db-create-hrsll\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.718533 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-combined-ca-bundle\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.718579 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-config-data\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.722566 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dfac-account-create-update-lxm7n"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.733370 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-config-data\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.738935 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-combined-ca-bundle\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.742573 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vv5g\" (UniqueName: \"kubernetes.io/projected/5f9044d3-72ef-43b1-b353-a02d6a3538b7-kube-api-access-4vv5g\") pod \"keystone-db-sync-b75n6\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.748292 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.784699 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.791381 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.820207 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409d5a14-70a1-4109-894d-84c2b04e297d-operator-scripts\") pod \"barbican-db-create-hrsll\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.820322 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49101baa-13ea-46fb-8e04-403d10c5e7ce-operator-scripts\") pod \"barbican-dfac-account-create-update-lxm7n\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.820453 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2pz\" (UniqueName: \"kubernetes.io/projected/49101baa-13ea-46fb-8e04-403d10c5e7ce-kube-api-access-xh2pz\") pod \"barbican-dfac-account-create-update-lxm7n\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.820594 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkqf\" (UniqueName: \"kubernetes.io/projected/409d5a14-70a1-4109-894d-84c2b04e297d-kube-api-access-dmkqf\") pod \"barbican-db-create-hrsll\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.821887 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/409d5a14-70a1-4109-894d-84c2b04e297d-operator-scripts\") pod \"barbican-db-create-hrsll\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.829896 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.846604 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkqf\" (UniqueName: \"kubernetes.io/projected/409d5a14-70a1-4109-894d-84c2b04e297d-kube-api-access-dmkqf\") pod \"barbican-db-create-hrsll\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.863677 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-j4kbh"] Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.922475 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49101baa-13ea-46fb-8e04-403d10c5e7ce-operator-scripts\") pod \"barbican-dfac-account-create-update-lxm7n\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.922600 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh2pz\" (UniqueName: \"kubernetes.io/projected/49101baa-13ea-46fb-8e04-403d10c5e7ce-kube-api-access-xh2pz\") pod \"barbican-dfac-account-create-update-lxm7n\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.923849 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/49101baa-13ea-46fb-8e04-403d10c5e7ce-operator-scripts\") pod \"barbican-dfac-account-create-update-lxm7n\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.941972 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh2pz\" (UniqueName: \"kubernetes.io/projected/49101baa-13ea-46fb-8e04-403d10c5e7ce-kube-api-access-xh2pz\") pod \"barbican-dfac-account-create-update-lxm7n\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.945655 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:07 crc kubenswrapper[4750]: I0214 14:14:07.967097 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.040057 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.092828 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5cf5-account-create-update-s64xm"] Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.127725 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnrwh\" (UniqueName: \"kubernetes.io/projected/0754481d-22ac-4078-8ce9-a7d86475725b-kube-api-access-gnrwh\") pod \"0754481d-22ac-4078-8ce9-a7d86475725b\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.127885 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-scripts\") pod \"0754481d-22ac-4078-8ce9-a7d86475725b\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.127970 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-log-ovn\") pod \"0754481d-22ac-4078-8ce9-a7d86475725b\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.128009 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run\") pod \"0754481d-22ac-4078-8ce9-a7d86475725b\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.128161 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-additional-scripts\") pod \"0754481d-22ac-4078-8ce9-a7d86475725b\" (UID: 
\"0754481d-22ac-4078-8ce9-a7d86475725b\") " Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.128190 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run-ovn\") pod \"0754481d-22ac-4078-8ce9-a7d86475725b\" (UID: \"0754481d-22ac-4078-8ce9-a7d86475725b\") " Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.128686 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0754481d-22ac-4078-8ce9-a7d86475725b" (UID: "0754481d-22ac-4078-8ce9-a7d86475725b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.128722 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0754481d-22ac-4078-8ce9-a7d86475725b" (UID: "0754481d-22ac-4078-8ce9-a7d86475725b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.128740 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run" (OuterVolumeSpecName: "var-run") pod "0754481d-22ac-4078-8ce9-a7d86475725b" (UID: "0754481d-22ac-4078-8ce9-a7d86475725b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.134087 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0754481d-22ac-4078-8ce9-a7d86475725b" (UID: "0754481d-22ac-4078-8ce9-a7d86475725b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.141782 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0754481d-22ac-4078-8ce9-a7d86475725b-kube-api-access-gnrwh" (OuterVolumeSpecName: "kube-api-access-gnrwh") pod "0754481d-22ac-4078-8ce9-a7d86475725b" (UID: "0754481d-22ac-4078-8ce9-a7d86475725b"). InnerVolumeSpecName "kube-api-access-gnrwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.235991 4750 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.236339 4750 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.236349 4750 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.236358 4750 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0754481d-22ac-4078-8ce9-a7d86475725b-var-run-ovn\") on 
node \"crc\" DevicePath \"\"" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.236366 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnrwh\" (UniqueName: \"kubernetes.io/projected/0754481d-22ac-4078-8ce9-a7d86475725b-kube-api-access-gnrwh\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.259184 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-scripts" (OuterVolumeSpecName: "scripts") pod "0754481d-22ac-4078-8ce9-a7d86475725b" (UID: "0754481d-22ac-4078-8ce9-a7d86475725b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.338148 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0754481d-22ac-4078-8ce9-a7d86475725b-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.456625 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4dn6h-config-l6chq" event={"ID":"0754481d-22ac-4078-8ce9-a7d86475725b","Type":"ContainerDied","Data":"b425e4bba60a44ff250544e0095c5da63f1e2cc4885d5306dfef2170a339eb27"} Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.457517 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b425e4bba60a44ff250544e0095c5da63f1e2cc4885d5306dfef2170a339eb27" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.457626 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4dn6h-config-l6chq" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.470381 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5cf5-account-create-update-s64xm" event={"ID":"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d","Type":"ContainerStarted","Data":"268a64d34fe82599a8c22b10e9849656a15d4306c8e553a2643dd1c49692ab0f"} Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.480400 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"55bee7a70f4f4e675305f2c84a27f8ea12802ba771a364b8112f8d32e953fd12"} Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.480438 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"c87536c3f29f5229663a0c7483e3b4362da89d2b8cb17cdc908e3ae2d93a4d85"} Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.482533 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j4kbh" event={"ID":"7f183dd0-25fa-4b38-b6a0-d7f670aa5433","Type":"ContainerStarted","Data":"a423239daf33cf33dc927876ec3d3109fe3f9489b7f1558a4871bea4458e39d8"} Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.651992 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6534-account-create-update-bsnpb"] Feb 14 14:14:08 crc kubenswrapper[4750]: W0214 14:14:08.654947 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27ae574_7796_42fe_854a_8a82e9936e2d.slice/crio-054426f2eb8d911a80282d27a55598eb0c6954b0df7e144efcb3e08aab7d606c WatchSource:0}: Error finding container 054426f2eb8d911a80282d27a55598eb0c6954b0df7e144efcb3e08aab7d606c: Status 404 returned error can't find the container with id 
054426f2eb8d911a80282d27a55598eb0c6954b0df7e144efcb3e08aab7d606c Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.664665 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.679881 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4dmfv"] Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.694284 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zz2gv"] Feb 14 14:14:08 crc kubenswrapper[4750]: W0214 14:14:08.695400 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod188954af_a7be_4dc4_9c84_eb83c5c39d7e.slice/crio-eca266b09e3eb4b64c1072aca005dcdfcdcf7326cf26b86a2abfecebedeeb0df WatchSource:0}: Error finding container eca266b09e3eb4b64c1072aca005dcdfcdcf7326cf26b86a2abfecebedeeb0df: Status 404 returned error can't find the container with id eca266b09e3eb4b64c1072aca005dcdfcdcf7326cf26b86a2abfecebedeeb0df Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.788694 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 14:14:08 crc kubenswrapper[4750]: W0214 14:14:08.789095 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod409d5a14_70a1_4109_894d_84c2b04e297d.slice/crio-b6b9a9f60b562871f2edae1c69c938955c75bf43f39982319962d5cfa03dc3ae WatchSource:0}: Error finding container b6b9a9f60b562871f2edae1c69c938955c75bf43f39982319962d5cfa03dc3ae: Status 404 returned error can't find the container with id b6b9a9f60b562871f2edae1c69c938955c75bf43f39982319962d5cfa03dc3ae Feb 14 14:14:08 crc kubenswrapper[4750]: I0214 14:14:08.789767 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6232-account-create-update-7gdfp"] Feb 14 14:14:08 crc 
kubenswrapper[4750]: I0214 14:14:08.789801 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hrsll"] Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.063258 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4dn6h-config-l6chq"] Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.067437 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4dn6h-config-l6chq"] Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.226883 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dfac-account-create-update-lxm7n"] Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.242736 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b75n6"] Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.247144 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.360987 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6mf6" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.467627 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-config-data\") pod \"4c9f7486-cff2-4a61-8c5e-c71977aab921\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.467678 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crv7d\" (UniqueName: \"kubernetes.io/projected/4c9f7486-cff2-4a61-8c5e-c71977aab921-kube-api-access-crv7d\") pod \"4c9f7486-cff2-4a61-8c5e-c71977aab921\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.467698 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-combined-ca-bundle\") pod \"4c9f7486-cff2-4a61-8c5e-c71977aab921\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.467846 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-db-sync-config-data\") pod \"4c9f7486-cff2-4a61-8c5e-c71977aab921\" (UID: \"4c9f7486-cff2-4a61-8c5e-c71977aab921\") " Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.473436 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9f7486-cff2-4a61-8c5e-c71977aab921-kube-api-access-crv7d" (OuterVolumeSpecName: "kube-api-access-crv7d") pod "4c9f7486-cff2-4a61-8c5e-c71977aab921" (UID: "4c9f7486-cff2-4a61-8c5e-c71977aab921"). InnerVolumeSpecName "kube-api-access-crv7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.474684 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c9f7486-cff2-4a61-8c5e-c71977aab921" (UID: "4c9f7486-cff2-4a61-8c5e-c71977aab921"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.499007 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c9f7486-cff2-4a61-8c5e-c71977aab921" (UID: "4c9f7486-cff2-4a61-8c5e-c71977aab921"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.500363 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b75n6" event={"ID":"5f9044d3-72ef-43b1-b353-a02d6a3538b7","Type":"ContainerStarted","Data":"057e68ba9f5054d250523fd49f02f3071a8ba3dd233513a3c621b8e5a9469d5a"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.512804 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"6db38adbb86966a0d8bb23c32121d44653dc2994f85fdd36e6449c8ef213730f"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.518615 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hrsll" event={"ID":"409d5a14-70a1-4109-894d-84c2b04e297d","Type":"ContainerStarted","Data":"afe0f83fe8f496efd9861f2f71616b04d396a9d50fbadf0a88b6c53b5a8410bd"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.518674 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-hrsll" event={"ID":"409d5a14-70a1-4109-894d-84c2b04e297d","Type":"ContainerStarted","Data":"b6b9a9f60b562871f2edae1c69c938955c75bf43f39982319962d5cfa03dc3ae"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.527529 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6mf6" event={"ID":"4c9f7486-cff2-4a61-8c5e-c71977aab921","Type":"ContainerDied","Data":"146471f1a834cf4e746ece1784c93f07ff4278b2e148dcaba0386d01defdf766"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.527579 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r6mf6" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.527581 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="146471f1a834cf4e746ece1784c93f07ff4278b2e148dcaba0386d01defdf766" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.531538 4750 generic.go:334] "Generic (PLEG): container finished" podID="7cdc1f12-6f04-4860-9536-32178d28e2b7" containerID="ce6f22a86bd2dd1b8c47df560dc8b6d77569792ab6053d515c871eb0343de5ee" exitCode=0 Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.531612 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7cdc1f12-6f04-4860-9536-32178d28e2b7","Type":"ContainerDied","Data":"ce6f22a86bd2dd1b8c47df560dc8b6d77569792ab6053d515c871eb0343de5ee"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.539731 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-config-data" (OuterVolumeSpecName: "config-data") pod "4c9f7486-cff2-4a61-8c5e-c71977aab921" (UID: "4c9f7486-cff2-4a61-8c5e-c71977aab921"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.542597 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dfac-account-create-update-lxm7n" event={"ID":"49101baa-13ea-46fb-8e04-403d10c5e7ce","Type":"ContainerStarted","Data":"c103ee23c892239320b1a0edbed7ca6308fe2a8e23d369afe4194bf3812ae0ca"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.542784 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-hrsll" podStartSLOduration=2.5426395939999997 podStartE2EDuration="2.542639594s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:09.53441628 +0000 UTC m=+1321.560405761" watchObservedRunningTime="2026-02-14 14:14:09.542639594 +0000 UTC m=+1321.568629075" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.546711 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4dmfv" event={"ID":"95df37d4-bdf9-49e2-8286-085cd7975f98","Type":"ContainerStarted","Data":"801486fd391c57b47eb5ec550fe3a34d8c5392070940de5b70f7d2cca0817627"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.546749 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4dmfv" event={"ID":"95df37d4-bdf9-49e2-8286-085cd7975f98","Type":"ContainerStarted","Data":"c0182b811c9d65437b36fdf1d7d72bbe391aac708e00ddf192483c6d1205534d"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.555958 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j4kbh" event={"ID":"7f183dd0-25fa-4b38-b6a0-d7f670aa5433","Type":"ContainerDied","Data":"4ebe93ac751bed7807ecb03304692224dd4168a252dfe1df194c7456a923caf6"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.558805 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="7f183dd0-25fa-4b38-b6a0-d7f670aa5433" containerID="4ebe93ac751bed7807ecb03304692224dd4168a252dfe1df194c7456a923caf6" exitCode=0 Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.562701 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6232-account-create-update-7gdfp" event={"ID":"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e","Type":"ContainerStarted","Data":"dd0601f4671a6b55ef25af865341d59a1d8d068e9982a8db1fa4e42902b24192"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.562746 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6232-account-create-update-7gdfp" event={"ID":"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e","Type":"ContainerStarted","Data":"bc84795dae0f9a2bac326234325c96dec760ca53dfe99445bd55ff1bef923c57"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.569049 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zz2gv" event={"ID":"188954af-a7be-4dc4-9c84-eb83c5c39d7e","Type":"ContainerStarted","Data":"ce29fdb2ccd3daeea6ab4523989850d1e6927522063615d2a1e59900b60e1bd5"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.569096 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zz2gv" event={"ID":"188954af-a7be-4dc4-9c84-eb83c5c39d7e","Type":"ContainerStarted","Data":"eca266b09e3eb4b64c1072aca005dcdfcdcf7326cf26b86a2abfecebedeeb0df"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.570788 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.570814 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crv7d\" (UniqueName: \"kubernetes.io/projected/4c9f7486-cff2-4a61-8c5e-c71977aab921-kube-api-access-crv7d\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 
14:14:09.570829 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.570841 4750 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9f7486-cff2-4a61-8c5e-c71977aab921-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.574767 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5cf5-account-create-update-s64xm" event={"ID":"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d","Type":"ContainerStarted","Data":"98069db1af30341072d6aad80f3343c8c2406960fae346c05f58f998c30fa29e"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.585486 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6534-account-create-update-bsnpb" event={"ID":"c27ae574-7796-42fe-854a-8a82e9936e2d","Type":"ContainerStarted","Data":"00ad5dc7d5e2c3d0f42d55c283270b07316b132fab2955495f0b2fc745aa9b81"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.585536 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6534-account-create-update-bsnpb" event={"ID":"c27ae574-7796-42fe-854a-8a82e9936e2d","Type":"ContainerStarted","Data":"054426f2eb8d911a80282d27a55598eb0c6954b0df7e144efcb3e08aab7d606c"} Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.600987 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-4dmfv" podStartSLOduration=2.600968428 podStartE2EDuration="2.600968428s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:09.579357807 +0000 UTC m=+1321.605347298" watchObservedRunningTime="2026-02-14 
14:14:09.600968428 +0000 UTC m=+1321.626957909" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.621866 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-5cf5-account-create-update-s64xm" podStartSLOduration=2.621827968 podStartE2EDuration="2.621827968s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:09.613213862 +0000 UTC m=+1321.639203343" watchObservedRunningTime="2026-02-14 14:14:09.621827968 +0000 UTC m=+1321.647817439" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.650994 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6232-account-create-update-7gdfp" podStartSLOduration=2.650967353 podStartE2EDuration="2.650967353s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:09.64718364 +0000 UTC m=+1321.673173121" watchObservedRunningTime="2026-02-14 14:14:09.650967353 +0000 UTC m=+1321.676956844" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.673651 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-zz2gv" podStartSLOduration=2.673630622 podStartE2EDuration="2.673630622s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:09.662090667 +0000 UTC m=+1321.688080148" watchObservedRunningTime="2026-02-14 14:14:09.673630622 +0000 UTC m=+1321.699620093" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.685785 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6534-account-create-update-bsnpb" podStartSLOduration=2.685767154 
podStartE2EDuration="2.685767154s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:09.68268571 +0000 UTC m=+1321.708675201" watchObservedRunningTime="2026-02-14 14:14:09.685767154 +0000 UTC m=+1321.711756635" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.922242 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-7t9pn"] Feb 14 14:14:09 crc kubenswrapper[4750]: E0214 14:14:09.922763 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9f7486-cff2-4a61-8c5e-c71977aab921" containerName="glance-db-sync" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.922783 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9f7486-cff2-4a61-8c5e-c71977aab921" containerName="glance-db-sync" Feb 14 14:14:09 crc kubenswrapper[4750]: E0214 14:14:09.922806 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0754481d-22ac-4078-8ce9-a7d86475725b" containerName="ovn-config" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.922813 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0754481d-22ac-4078-8ce9-a7d86475725b" containerName="ovn-config" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.922992 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9f7486-cff2-4a61-8c5e-c71977aab921" containerName="glance-db-sync" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.923014 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="0754481d-22ac-4078-8ce9-a7d86475725b" containerName="ovn-config" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.924087 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:09 crc kubenswrapper[4750]: I0214 14:14:09.937456 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-7t9pn"] Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.007555 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.007654 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.007717 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lpc\" (UniqueName: \"kubernetes.io/projected/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-kube-api-access-m2lpc\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.007783 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.007842 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-config\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.109941 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-config\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.110030 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.110087 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.110148 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2lpc\" (UniqueName: \"kubernetes.io/projected/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-kube-api-access-m2lpc\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.110201 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.111173 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-config\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.111247 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.111594 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.113941 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.146964 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lpc\" (UniqueName: \"kubernetes.io/projected/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-kube-api-access-m2lpc\") pod 
\"dnsmasq-dns-5b946c75cc-7t9pn\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.271641 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.602799 4750 generic.go:334] "Generic (PLEG): container finished" podID="409d5a14-70a1-4109-894d-84c2b04e297d" containerID="afe0f83fe8f496efd9861f2f71616b04d396a9d50fbadf0a88b6c53b5a8410bd" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.603252 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hrsll" event={"ID":"409d5a14-70a1-4109-894d-84c2b04e297d","Type":"ContainerDied","Data":"afe0f83fe8f496efd9861f2f71616b04d396a9d50fbadf0a88b6c53b5a8410bd"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.607537 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7cdc1f12-6f04-4860-9536-32178d28e2b7","Type":"ContainerStarted","Data":"01f9ee910651aa6d896d35f5b1f90627c6984a7fa5bdb225477fdf76d0cb9293"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.614337 4750 generic.go:334] "Generic (PLEG): container finished" podID="49101baa-13ea-46fb-8e04-403d10c5e7ce" containerID="d3ff0b79d2f1cb2b658be7080370c62ccb4b0996018e0f5a78242a22baf930e0" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.614393 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dfac-account-create-update-lxm7n" event={"ID":"49101baa-13ea-46fb-8e04-403d10c5e7ce","Type":"ContainerDied","Data":"d3ff0b79d2f1cb2b658be7080370c62ccb4b0996018e0f5a78242a22baf930e0"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.625447 4750 generic.go:334] "Generic (PLEG): container finished" podID="0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" 
containerID="98069db1af30341072d6aad80f3343c8c2406960fae346c05f58f998c30fa29e" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.625554 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5cf5-account-create-update-s64xm" event={"ID":"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d","Type":"ContainerDied","Data":"98069db1af30341072d6aad80f3343c8c2406960fae346c05f58f998c30fa29e"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.627587 4750 generic.go:334] "Generic (PLEG): container finished" podID="c27ae574-7796-42fe-854a-8a82e9936e2d" containerID="00ad5dc7d5e2c3d0f42d55c283270b07316b132fab2955495f0b2fc745aa9b81" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.627649 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6534-account-create-update-bsnpb" event={"ID":"c27ae574-7796-42fe-854a-8a82e9936e2d","Type":"ContainerDied","Data":"00ad5dc7d5e2c3d0f42d55c283270b07316b132fab2955495f0b2fc745aa9b81"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.629548 4750 generic.go:334] "Generic (PLEG): container finished" podID="6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" containerID="dd0601f4671a6b55ef25af865341d59a1d8d068e9982a8db1fa4e42902b24192" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.630391 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6232-account-create-update-7gdfp" event={"ID":"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e","Type":"ContainerDied","Data":"dd0601f4671a6b55ef25af865341d59a1d8d068e9982a8db1fa4e42902b24192"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.638196 4750 generic.go:334] "Generic (PLEG): container finished" podID="95df37d4-bdf9-49e2-8286-085cd7975f98" containerID="801486fd391c57b47eb5ec550fe3a34d8c5392070940de5b70f7d2cca0817627" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.638276 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4dmfv" 
event={"ID":"95df37d4-bdf9-49e2-8286-085cd7975f98","Type":"ContainerDied","Data":"801486fd391c57b47eb5ec550fe3a34d8c5392070940de5b70f7d2cca0817627"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.654697 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"762615d9d06f1693e662af04564b641f1a43b58f36720f10c8a44612c5199b97"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.658459 4750 generic.go:334] "Generic (PLEG): container finished" podID="188954af-a7be-4dc4-9c84-eb83c5c39d7e" containerID="ce29fdb2ccd3daeea6ab4523989850d1e6927522063615d2a1e59900b60e1bd5" exitCode=0 Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.658514 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zz2gv" event={"ID":"188954af-a7be-4dc4-9c84-eb83c5c39d7e","Type":"ContainerDied","Data":"ce29fdb2ccd3daeea6ab4523989850d1e6927522063615d2a1e59900b60e1bd5"} Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.753726 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0754481d-22ac-4078-8ce9-a7d86475725b" path="/var/lib/kubelet/pods/0754481d-22ac-4078-8ce9-a7d86475725b/volumes" Feb 14 14:14:10 crc kubenswrapper[4750]: I0214 14:14:10.830467 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-7t9pn"] Feb 14 14:14:11 crc kubenswrapper[4750]: W0214 14:14:11.048008 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3e5ca7_b31f_4911_92d8_a9cbb98a1f17.slice/crio-60bfa0aa9e1e6f9dcc7c3db5ba302c3860d805d3799dc2b038bb6374eddbb4c9 WatchSource:0}: Error finding container 60bfa0aa9e1e6f9dcc7c3db5ba302c3860d805d3799dc2b038bb6374eddbb4c9: Status 404 returned error can't find the container with id 60bfa0aa9e1e6f9dcc7c3db5ba302c3860d805d3799dc2b038bb6374eddbb4c9 Feb 14 14:14:11 crc 
kubenswrapper[4750]: I0214 14:14:11.268084 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.339631 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-operator-scripts\") pod \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.339732 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qszml\" (UniqueName: \"kubernetes.io/projected/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-kube-api-access-qszml\") pod \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\" (UID: \"7f183dd0-25fa-4b38-b6a0-d7f670aa5433\") " Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.340438 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f183dd0-25fa-4b38-b6a0-d7f670aa5433" (UID: "7f183dd0-25fa-4b38-b6a0-d7f670aa5433"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.349640 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-kube-api-access-qszml" (OuterVolumeSpecName: "kube-api-access-qszml") pod "7f183dd0-25fa-4b38-b6a0-d7f670aa5433" (UID: "7f183dd0-25fa-4b38-b6a0-d7f670aa5433"). InnerVolumeSpecName "kube-api-access-qszml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.442721 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.443072 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qszml\" (UniqueName: \"kubernetes.io/projected/7f183dd0-25fa-4b38-b6a0-d7f670aa5433-kube-api-access-qszml\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.677407 4750 generic.go:334] "Generic (PLEG): container finished" podID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerID="1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951" exitCode=0 Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.677480 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" event={"ID":"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17","Type":"ContainerDied","Data":"1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951"} Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.677503 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" event={"ID":"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17","Type":"ContainerStarted","Data":"60bfa0aa9e1e6f9dcc7c3db5ba302c3860d805d3799dc2b038bb6374eddbb4c9"} Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.686501 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"79c7852214c0be6349fff32677fad18ecfbe2266752eae1c6bbb0b1e70e29ad2"} Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.693580 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j4kbh" 
event={"ID":"7f183dd0-25fa-4b38-b6a0-d7f670aa5433","Type":"ContainerDied","Data":"a423239daf33cf33dc927876ec3d3109fe3f9489b7f1558a4871bea4458e39d8"} Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.693663 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a423239daf33cf33dc927876ec3d3109fe3f9489b7f1558a4871bea4458e39d8" Feb 14 14:14:11 crc kubenswrapper[4750]: I0214 14:14:11.694232 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j4kbh" Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.714079 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" event={"ID":"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17","Type":"ContainerStarted","Data":"858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce"} Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.714563 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.716567 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5cf5-account-create-update-s64xm" event={"ID":"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d","Type":"ContainerDied","Data":"268a64d34fe82599a8c22b10e9849656a15d4306c8e553a2643dd1c49692ab0f"} Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.716590 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268a64d34fe82599a8c22b10e9849656a15d4306c8e553a2643dd1c49692ab0f" Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.734561 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"cc781600024cdccffaf9c0b92d28088443e0b3dc3894c91f45a89ca8e043a430"} Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.734604 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"b191d9fbd25171728d7b2249cf8506161b2233fea2dcef8d724a446d910d1488"} Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.734756 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" podStartSLOduration=3.734737766 podStartE2EDuration="3.734737766s" podCreationTimestamp="2026-02-14 14:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:12.73271614 +0000 UTC m=+1324.758705621" watchObservedRunningTime="2026-02-14 14:14:12.734737766 +0000 UTC m=+1324.760727247" Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.738060 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zz2gv" event={"ID":"188954af-a7be-4dc4-9c84-eb83c5c39d7e","Type":"ContainerDied","Data":"eca266b09e3eb4b64c1072aca005dcdfcdcf7326cf26b86a2abfecebedeeb0df"} Feb 14 14:14:12 crc kubenswrapper[4750]: I0214 14:14:12.738103 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca266b09e3eb4b64c1072aca005dcdfcdcf7326cf26b86a2abfecebedeeb0df" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.199533 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.232353 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.242039 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.256803 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.277968 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316390 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfgnw\" (UniqueName: \"kubernetes.io/projected/95df37d4-bdf9-49e2-8286-085cd7975f98-kube-api-access-sfgnw\") pod \"95df37d4-bdf9-49e2-8286-085cd7975f98\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316449 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjzn\" (UniqueName: \"kubernetes.io/projected/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-kube-api-access-8jjzn\") pod \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316474 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh2pz\" (UniqueName: \"kubernetes.io/projected/49101baa-13ea-46fb-8e04-403d10c5e7ce-kube-api-access-xh2pz\") pod \"49101baa-13ea-46fb-8e04-403d10c5e7ce\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316513 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df37d4-bdf9-49e2-8286-085cd7975f98-operator-scripts\") pod \"95df37d4-bdf9-49e2-8286-085cd7975f98\" (UID: \"95df37d4-bdf9-49e2-8286-085cd7975f98\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316606 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/188954af-a7be-4dc4-9c84-eb83c5c39d7e-operator-scripts\") pod \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316651 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-operator-scripts\") pod \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\" (UID: \"0e3b2bdb-77bc-4578-adb8-28b3ee6e911d\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316697 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49101baa-13ea-46fb-8e04-403d10c5e7ce-operator-scripts\") pod \"49101baa-13ea-46fb-8e04-403d10c5e7ce\" (UID: \"49101baa-13ea-46fb-8e04-403d10c5e7ce\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.316750 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mk7q\" (UniqueName: \"kubernetes.io/projected/188954af-a7be-4dc4-9c84-eb83c5c39d7e-kube-api-access-9mk7q\") pod \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\" (UID: \"188954af-a7be-4dc4-9c84-eb83c5c39d7e\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317156 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188954af-a7be-4dc4-9c84-eb83c5c39d7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "188954af-a7be-4dc4-9c84-eb83c5c39d7e" (UID: "188954af-a7be-4dc4-9c84-eb83c5c39d7e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317204 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" (UID: "0e3b2bdb-77bc-4578-adb8-28b3ee6e911d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317358 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95df37d4-bdf9-49e2-8286-085cd7975f98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95df37d4-bdf9-49e2-8286-085cd7975f98" (UID: "95df37d4-bdf9-49e2-8286-085cd7975f98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317593 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49101baa-13ea-46fb-8e04-403d10c5e7ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49101baa-13ea-46fb-8e04-403d10c5e7ce" (UID: "49101baa-13ea-46fb-8e04-403d10c5e7ce"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317688 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df37d4-bdf9-49e2-8286-085cd7975f98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317705 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188954af-a7be-4dc4-9c84-eb83c5c39d7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.317714 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.326910 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49101baa-13ea-46fb-8e04-403d10c5e7ce-kube-api-access-xh2pz" (OuterVolumeSpecName: "kube-api-access-xh2pz") pod "49101baa-13ea-46fb-8e04-403d10c5e7ce" (UID: "49101baa-13ea-46fb-8e04-403d10c5e7ce"). InnerVolumeSpecName "kube-api-access-xh2pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.326966 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188954af-a7be-4dc4-9c84-eb83c5c39d7e-kube-api-access-9mk7q" (OuterVolumeSpecName: "kube-api-access-9mk7q") pod "188954af-a7be-4dc4-9c84-eb83c5c39d7e" (UID: "188954af-a7be-4dc4-9c84-eb83c5c39d7e"). InnerVolumeSpecName "kube-api-access-9mk7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.328811 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-kube-api-access-8jjzn" (OuterVolumeSpecName: "kube-api-access-8jjzn") pod "0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" (UID: "0e3b2bdb-77bc-4578-adb8-28b3ee6e911d"). InnerVolumeSpecName "kube-api-access-8jjzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.336712 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95df37d4-bdf9-49e2-8286-085cd7975f98-kube-api-access-sfgnw" (OuterVolumeSpecName: "kube-api-access-sfgnw") pod "95df37d4-bdf9-49e2-8286-085cd7975f98" (UID: "95df37d4-bdf9-49e2-8286-085cd7975f98"). InnerVolumeSpecName "kube-api-access-sfgnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.419469 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrvlg\" (UniqueName: \"kubernetes.io/projected/c27ae574-7796-42fe-854a-8a82e9936e2d-kube-api-access-mrvlg\") pod \"c27ae574-7796-42fe-854a-8a82e9936e2d\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.419546 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27ae574-7796-42fe-854a-8a82e9936e2d-operator-scripts\") pod \"c27ae574-7796-42fe-854a-8a82e9936e2d\" (UID: \"c27ae574-7796-42fe-854a-8a82e9936e2d\") " Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.420181 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfgnw\" (UniqueName: \"kubernetes.io/projected/95df37d4-bdf9-49e2-8286-085cd7975f98-kube-api-access-sfgnw\") on node \"crc\" DevicePath \"\"" Feb 14 
14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.420203 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjzn\" (UniqueName: \"kubernetes.io/projected/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d-kube-api-access-8jjzn\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.420225 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh2pz\" (UniqueName: \"kubernetes.io/projected/49101baa-13ea-46fb-8e04-403d10c5e7ce-kube-api-access-xh2pz\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.420243 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49101baa-13ea-46fb-8e04-403d10c5e7ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.420255 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mk7q\" (UniqueName: \"kubernetes.io/projected/188954af-a7be-4dc4-9c84-eb83c5c39d7e-kube-api-access-9mk7q\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.420657 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27ae574-7796-42fe-854a-8a82e9936e2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c27ae574-7796-42fe-854a-8a82e9936e2d" (UID: "c27ae574-7796-42fe-854a-8a82e9936e2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.423971 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27ae574-7796-42fe-854a-8a82e9936e2d-kube-api-access-mrvlg" (OuterVolumeSpecName: "kube-api-access-mrvlg") pod "c27ae574-7796-42fe-854a-8a82e9936e2d" (UID: "c27ae574-7796-42fe-854a-8a82e9936e2d"). InnerVolumeSpecName "kube-api-access-mrvlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.522512 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrvlg\" (UniqueName: \"kubernetes.io/projected/c27ae574-7796-42fe-854a-8a82e9936e2d-kube-api-access-mrvlg\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.522547 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27ae574-7796-42fe-854a-8a82e9936e2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.754943 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7cdc1f12-6f04-4860-9536-32178d28e2b7","Type":"ContainerStarted","Data":"ffe1de78cd5753e8f7bbb5cd894663b8a594d40a2a3f43469add4f3cc7f0a5f6"} Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.757198 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dfac-account-create-update-lxm7n" event={"ID":"49101baa-13ea-46fb-8e04-403d10c5e7ce","Type":"ContainerDied","Data":"c103ee23c892239320b1a0edbed7ca6308fe2a8e23d369afe4194bf3812ae0ca"} Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.757241 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c103ee23c892239320b1a0edbed7ca6308fe2a8e23d369afe4194bf3812ae0ca" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.757305 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dfac-account-create-update-lxm7n" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.770824 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4dmfv" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.770856 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4dmfv" event={"ID":"95df37d4-bdf9-49e2-8286-085cd7975f98","Type":"ContainerDied","Data":"c0182b811c9d65437b36fdf1d7d72bbe391aac708e00ddf192483c6d1205534d"} Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.770902 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0182b811c9d65437b36fdf1d7d72bbe391aac708e00ddf192483c6d1205534d" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.772938 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6534-account-create-update-bsnpb" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.773056 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6534-account-create-update-bsnpb" event={"ID":"c27ae574-7796-42fe-854a-8a82e9936e2d","Type":"ContainerDied","Data":"054426f2eb8d911a80282d27a55598eb0c6954b0df7e144efcb3e08aab7d606c"} Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.773101 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054426f2eb8d911a80282d27a55598eb0c6954b0df7e144efcb3e08aab7d606c" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.779078 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"f1d3742bf59cfdd2020846a02715a40e149699403d385c519f8c5719bc7b0cf6"} Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.779105 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zz2gv" Feb 14 14:14:13 crc kubenswrapper[4750]: I0214 14:14:13.779173 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-5cf5-account-create-update-s64xm" Feb 14 14:14:15 crc kubenswrapper[4750]: I0214 14:14:15.803912 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6232-account-create-update-7gdfp" event={"ID":"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e","Type":"ContainerDied","Data":"bc84795dae0f9a2bac326234325c96dec760ca53dfe99445bd55ff1bef923c57"} Feb 14 14:14:15 crc kubenswrapper[4750]: I0214 14:14:15.805205 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc84795dae0f9a2bac326234325c96dec760ca53dfe99445bd55ff1bef923c57" Feb 14 14:14:15 crc kubenswrapper[4750]: I0214 14:14:15.806607 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hrsll" event={"ID":"409d5a14-70a1-4109-894d-84c2b04e297d","Type":"ContainerDied","Data":"b6b9a9f60b562871f2edae1c69c938955c75bf43f39982319962d5cfa03dc3ae"} Feb 14 14:14:15 crc kubenswrapper[4750]: I0214 14:14:15.806772 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b9a9f60b562871f2edae1c69c938955c75bf43f39982319962d5cfa03dc3ae" Feb 14 14:14:15 crc kubenswrapper[4750]: I0214 14:14:15.954582 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:15 crc kubenswrapper[4750]: I0214 14:14:15.971178 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.079300 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr2rt\" (UniqueName: \"kubernetes.io/projected/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-kube-api-access-vr2rt\") pod \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.079610 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmkqf\" (UniqueName: \"kubernetes.io/projected/409d5a14-70a1-4109-894d-84c2b04e297d-kube-api-access-dmkqf\") pod \"409d5a14-70a1-4109-894d-84c2b04e297d\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.079726 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409d5a14-70a1-4109-894d-84c2b04e297d-operator-scripts\") pod \"409d5a14-70a1-4109-894d-84c2b04e297d\" (UID: \"409d5a14-70a1-4109-894d-84c2b04e297d\") " Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.079873 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-operator-scripts\") pod \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\" (UID: \"6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e\") " Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.083330 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" (UID: "6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.083747 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409d5a14-70a1-4109-894d-84c2b04e297d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "409d5a14-70a1-4109-894d-84c2b04e297d" (UID: "409d5a14-70a1-4109-894d-84c2b04e297d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.089313 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-kube-api-access-vr2rt" (OuterVolumeSpecName: "kube-api-access-vr2rt") pod "6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" (UID: "6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e"). InnerVolumeSpecName "kube-api-access-vr2rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.089472 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409d5a14-70a1-4109-894d-84c2b04e297d-kube-api-access-dmkqf" (OuterVolumeSpecName: "kube-api-access-dmkqf") pod "409d5a14-70a1-4109-894d-84c2b04e297d" (UID: "409d5a14-70a1-4109-894d-84c2b04e297d"). InnerVolumeSpecName "kube-api-access-dmkqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.182816 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.182854 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr2rt\" (UniqueName: \"kubernetes.io/projected/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e-kube-api-access-vr2rt\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.182868 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmkqf\" (UniqueName: \"kubernetes.io/projected/409d5a14-70a1-4109-894d-84c2b04e297d-kube-api-access-dmkqf\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.182880 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409d5a14-70a1-4109-894d-84c2b04e297d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.822737 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"36c2657aeab45ce5231c61af7ee93159b2feaa967d3bd13486b79f3692863f4d"} Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.823077 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"263a2beca6819ba6316a2f8fe42c14058bbd452cd861b42eda17be214c93f98f"} Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.823095 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e623022c-0cda-4463-b5e1-3157a1f8c1c1","Type":"ContainerStarted","Data":"daf5e13dc1374dce6544b52f9f16adec7c0a2e3b554188631b7a9149b22c7118"} Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.827529 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7cdc1f12-6f04-4860-9536-32178d28e2b7","Type":"ContainerStarted","Data":"35684d0d2a7ebc3a49bffb474147a520f8c6b53cce30d2e06977ecaf0ab2c3cc"} Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.829364 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b75n6" event={"ID":"5f9044d3-72ef-43b1-b353-a02d6a3538b7","Type":"ContainerStarted","Data":"6420d01200b8259f0f9877d325bb290d3e674fa5925fd302cabeb88e04a39a8d"} Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.829381 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6232-account-create-update-7gdfp" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.829503 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hrsll" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.874842 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.675008391 podStartE2EDuration="46.874813701s" podCreationTimestamp="2026-02-14 14:13:30 +0000 UTC" firstStartedPulling="2026-02-14 14:14:03.905894829 +0000 UTC m=+1315.931884310" lastFinishedPulling="2026-02-14 14:14:11.105700139 +0000 UTC m=+1323.131689620" observedRunningTime="2026-02-14 14:14:16.857678443 +0000 UTC m=+1328.883667924" watchObservedRunningTime="2026-02-14 14:14:16.874813701 +0000 UTC m=+1328.900803182" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.915061 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.915029269 podStartE2EDuration="18.915029269s" podCreationTimestamp="2026-02-14 14:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:16.902790035 +0000 UTC m=+1328.928779526" watchObservedRunningTime="2026-02-14 14:14:16.915029269 +0000 UTC m=+1328.941018750" Feb 14 14:14:16 crc kubenswrapper[4750]: I0214 14:14:16.933251 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-b75n6" podStartSLOduration=3.400522927 podStartE2EDuration="9.933232256s" podCreationTimestamp="2026-02-14 14:14:07 +0000 UTC" firstStartedPulling="2026-02-14 14:14:09.245339004 +0000 UTC m=+1321.271328485" lastFinishedPulling="2026-02-14 14:14:15.778048323 +0000 UTC m=+1327.804037814" observedRunningTime="2026-02-14 14:14:16.925476425 +0000 UTC m=+1328.951465906" watchObservedRunningTime="2026-02-14 14:14:16.933232256 +0000 UTC m=+1328.959221737" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.143318 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5b946c75cc-7t9pn"] Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.144070 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerName="dnsmasq-dns" containerID="cri-o://858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce" gracePeriod=10 Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.149290 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.189729 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8cwt7"] Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190164 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f183dd0-25fa-4b38-b6a0-d7f670aa5433" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190177 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f183dd0-25fa-4b38-b6a0-d7f670aa5433" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190215 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188954af-a7be-4dc4-9c84-eb83c5c39d7e" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190222 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="188954af-a7be-4dc4-9c84-eb83c5c39d7e" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190247 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27ae574-7796-42fe-854a-8a82e9936e2d" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190254 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27ae574-7796-42fe-854a-8a82e9936e2d" 
containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190263 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409d5a14-70a1-4109-894d-84c2b04e297d" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190268 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="409d5a14-70a1-4109-894d-84c2b04e297d" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190276 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49101baa-13ea-46fb-8e04-403d10c5e7ce" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190281 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="49101baa-13ea-46fb-8e04-403d10c5e7ce" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190313 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190319 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190328 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190334 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.190346 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95df37d4-bdf9-49e2-8286-085cd7975f98" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 
14:14:17.190354 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="95df37d4-bdf9-49e2-8286-085cd7975f98" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190546 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="409d5a14-70a1-4109-894d-84c2b04e297d" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190559 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190565 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="188954af-a7be-4dc4-9c84-eb83c5c39d7e" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190583 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27ae574-7796-42fe-854a-8a82e9936e2d" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190595 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="95df37d4-bdf9-49e2-8286-085cd7975f98" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190607 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f183dd0-25fa-4b38-b6a0-d7f670aa5433" containerName="mariadb-database-create" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190620 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.190628 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="49101baa-13ea-46fb-8e04-403d10c5e7ce" containerName="mariadb-account-create-update" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.192154 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.195302 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.245315 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8cwt7"] Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.307177 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.307436 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.307472 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.307518 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-config\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.307579 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjg4n\" (UniqueName: \"kubernetes.io/projected/81a4f540-742b-4f92-9839-b2e5fc1aacfa-kube-api-access-jjg4n\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.307596 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.409733 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjg4n\" (UniqueName: \"kubernetes.io/projected/81a4f540-742b-4f92-9839-b2e5fc1aacfa-kube-api-access-jjg4n\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.409791 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.409905 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: 
\"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.409936 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.409968 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.409986 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-config\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.411040 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-config\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.426245 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc 
kubenswrapper[4750]: I0214 14:14:17.426875 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.427050 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.427520 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.435149 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjg4n\" (UniqueName: \"kubernetes.io/projected/81a4f540-742b-4f92-9839-b2e5fc1aacfa-kube-api-access-jjg4n\") pod \"dnsmasq-dns-74f6bcbc87-8cwt7\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.549975 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.687027 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.721422 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-nb\") pod \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.721568 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb\") pod \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.721637 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-config\") pod \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.721717 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2lpc\" (UniqueName: \"kubernetes.io/projected/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-kube-api-access-m2lpc\") pod \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.721922 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-dns-svc\") pod \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.739533 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-kube-api-access-m2lpc" (OuterVolumeSpecName: "kube-api-access-m2lpc") pod "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" (UID: "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17"). InnerVolumeSpecName "kube-api-access-m2lpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.809490 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" (UID: "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.827873 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" (UID: "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.828398 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb\") pod \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\" (UID: \"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17\") " Feb 14 14:14:17 crc kubenswrapper[4750]: W0214 14:14:17.829535 4750 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.829550 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" (UID: "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.830046 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.830062 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2lpc\" (UniqueName: \"kubernetes.io/projected/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-kube-api-access-m2lpc\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.830221 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.857794 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-config" (OuterVolumeSpecName: "config") pod "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" (UID: "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.869725 4750 generic.go:334] "Generic (PLEG): container finished" podID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerID="858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce" exitCode=0 Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.869781 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.869835 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" event={"ID":"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17","Type":"ContainerDied","Data":"858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce"} Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.869875 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-7t9pn" event={"ID":"ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17","Type":"ContainerDied","Data":"60bfa0aa9e1e6f9dcc7c3db5ba302c3860d805d3799dc2b038bb6374eddbb4c9"} Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.869900 4750 scope.go:117] "RemoveContainer" containerID="858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.870603 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" (UID: "ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.934376 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.934401 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.947840 4750 scope.go:117] "RemoveContainer" containerID="1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.966525 4750 scope.go:117] "RemoveContainer" containerID="858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.966914 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce\": container with ID starting with 858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce not found: ID does not exist" containerID="858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.966943 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce"} err="failed to get container status \"858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce\": rpc error: code = NotFound desc = could not find container \"858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce\": container with ID starting with 858491b0cb8de1f31b54ccd1e78b74c5d21a820ad92a7e2b7de782b834cf01ce not found: ID does not exist" Feb 14 14:14:17 crc 
kubenswrapper[4750]: I0214 14:14:17.966968 4750 scope.go:117] "RemoveContainer" containerID="1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951" Feb 14 14:14:17 crc kubenswrapper[4750]: E0214 14:14:17.967232 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951\": container with ID starting with 1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951 not found: ID does not exist" containerID="1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951" Feb 14 14:14:17 crc kubenswrapper[4750]: I0214 14:14:17.967251 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951"} err="failed to get container status \"1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951\": rpc error: code = NotFound desc = could not find container \"1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951\": container with ID starting with 1ccf6e17fc46c73609e25daac16610d6fb2cb81659a0c453593ba4bc9000d951 not found: ID does not exist" Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.077656 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8cwt7"] Feb 14 14:14:18 crc kubenswrapper[4750]: W0214 14:14:18.078075 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81a4f540_742b_4f92_9839_b2e5fc1aacfa.slice/crio-0d68787bfc6d13b65e1d18b416668bf14c5a21e766e23034d4de4c69ce075205 WatchSource:0}: Error finding container 0d68787bfc6d13b65e1d18b416668bf14c5a21e766e23034d4de4c69ce075205: Status 404 returned error can't find the container with id 0d68787bfc6d13b65e1d18b416668bf14c5a21e766e23034d4de4c69ce075205 Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.218936 4750 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-7t9pn"] Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.233634 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-7t9pn"] Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.755868 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" path="/var/lib/kubelet/pods/ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17/volumes" Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.882178 4750 generic.go:334] "Generic (PLEG): container finished" podID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerID="c5a4a6fcac22a0c240b90d05ad9e4d44b11b595ea24f898a138483388f87fb2c" exitCode=0 Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.882288 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" event={"ID":"81a4f540-742b-4f92-9839-b2e5fc1aacfa","Type":"ContainerDied","Data":"c5a4a6fcac22a0c240b90d05ad9e4d44b11b595ea24f898a138483388f87fb2c"} Feb 14 14:14:18 crc kubenswrapper[4750]: I0214 14:14:18.882368 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" event={"ID":"81a4f540-742b-4f92-9839-b2e5fc1aacfa","Type":"ContainerStarted","Data":"0d68787bfc6d13b65e1d18b416668bf14c5a21e766e23034d4de4c69ce075205"} Feb 14 14:14:19 crc kubenswrapper[4750]: I0214 14:14:19.543515 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 14:14:19 crc kubenswrapper[4750]: I0214 14:14:19.900108 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" event={"ID":"81a4f540-742b-4f92-9839-b2e5fc1aacfa","Type":"ContainerStarted","Data":"e05f3595e6db162f4b3e8c0fd258c666d48be9f063b7be8053fdeda2c4b7bb70"} Feb 14 14:14:19 crc kubenswrapper[4750]: I0214 14:14:19.900447 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:20 crc kubenswrapper[4750]: I0214 14:14:20.914133 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f9044d3-72ef-43b1-b353-a02d6a3538b7" containerID="6420d01200b8259f0f9877d325bb290d3e674fa5925fd302cabeb88e04a39a8d" exitCode=0 Feb 14 14:14:20 crc kubenswrapper[4750]: I0214 14:14:20.914242 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b75n6" event={"ID":"5f9044d3-72ef-43b1-b353-a02d6a3538b7","Type":"ContainerDied","Data":"6420d01200b8259f0f9877d325bb290d3e674fa5925fd302cabeb88e04a39a8d"} Feb 14 14:14:20 crc kubenswrapper[4750]: I0214 14:14:20.940874 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" podStartSLOduration=3.940849283 podStartE2EDuration="3.940849283s" podCreationTimestamp="2026-02-14 14:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:19.931039201 +0000 UTC m=+1331.957028702" watchObservedRunningTime="2026-02-14 14:14:20.940849283 +0000 UTC m=+1332.966838784" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.382361 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.432452 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-combined-ca-bundle\") pod \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.432496 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-config-data\") pod \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.432605 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vv5g\" (UniqueName: \"kubernetes.io/projected/5f9044d3-72ef-43b1-b353-a02d6a3538b7-kube-api-access-4vv5g\") pod \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\" (UID: \"5f9044d3-72ef-43b1-b353-a02d6a3538b7\") " Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.456864 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9044d3-72ef-43b1-b353-a02d6a3538b7-kube-api-access-4vv5g" (OuterVolumeSpecName: "kube-api-access-4vv5g") pod "5f9044d3-72ef-43b1-b353-a02d6a3538b7" (UID: "5f9044d3-72ef-43b1-b353-a02d6a3538b7"). InnerVolumeSpecName "kube-api-access-4vv5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.466589 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f9044d3-72ef-43b1-b353-a02d6a3538b7" (UID: "5f9044d3-72ef-43b1-b353-a02d6a3538b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.513418 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-config-data" (OuterVolumeSpecName: "config-data") pod "5f9044d3-72ef-43b1-b353-a02d6a3538b7" (UID: "5f9044d3-72ef-43b1-b353-a02d6a3538b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.534506 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.534536 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9044d3-72ef-43b1-b353-a02d6a3538b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.534550 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vv5g\" (UniqueName: \"kubernetes.io/projected/5f9044d3-72ef-43b1-b353-a02d6a3538b7-kube-api-access-4vv5g\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.939021 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b75n6" event={"ID":"5f9044d3-72ef-43b1-b353-a02d6a3538b7","Type":"ContainerDied","Data":"057e68ba9f5054d250523fd49f02f3071a8ba3dd233513a3c621b8e5a9469d5a"} Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.939069 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057e68ba9f5054d250523fd49f02f3071a8ba3dd233513a3c621b8e5a9469d5a" Feb 14 14:14:22 crc kubenswrapper[4750]: I0214 14:14:22.939213 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-b75n6" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.160872 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8cwt7"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.161446 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerName="dnsmasq-dns" containerID="cri-o://e05f3595e6db162f4b3e8c0fd258c666d48be9f063b7be8053fdeda2c4b7bb70" gracePeriod=10 Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.206741 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dxfvz"] Feb 14 14:14:23 crc kubenswrapper[4750]: E0214 14:14:23.207391 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9044d3-72ef-43b1-b353-a02d6a3538b7" containerName="keystone-db-sync" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.207409 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9044d3-72ef-43b1-b353-a02d6a3538b7" containerName="keystone-db-sync" Feb 14 14:14:23 crc kubenswrapper[4750]: E0214 14:14:23.207423 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerName="dnsmasq-dns" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.207428 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerName="dnsmasq-dns" Feb 14 14:14:23 crc kubenswrapper[4750]: E0214 14:14:23.207449 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerName="init" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.207455 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerName="init" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.207657 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3e5ca7-b31f-4911-92d8-a9cbb98a1f17" containerName="dnsmasq-dns" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.207680 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9044d3-72ef-43b1-b353-a02d6a3538b7" containerName="keystone-db-sync" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.208719 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.220418 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zt5zh"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.222229 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.224275 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.225867 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.226086 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtfpt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.228811 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.234568 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.239615 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dxfvz"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258003 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258060 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-credential-keys\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258102 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-svc\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258157 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-config\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258190 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-config-data\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258243 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-combined-ca-bundle\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258284 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258323 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qm9j\" (UniqueName: \"kubernetes.io/projected/eb7b571e-71f9-418e-a695-14df6e3956ab-kube-api-access-5qm9j\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258343 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-fernet-keys\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258368 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6fw7\" (UniqueName: \"kubernetes.io/projected/3b504616-21a1-400e-a1c0-152ea8dbcfb5-kube-api-access-f6fw7\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258416 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-scripts\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.258438 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.312308 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gdzkt"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.316852 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.320736 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.320952 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dc62p" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.341925 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zt5zh"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.360895 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.360946 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-credential-keys\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.360985 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-svc\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361018 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-config\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361056 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-config-data\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361151 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-combined-ca-bundle\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361181 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-btvcd\" (UniqueName: \"kubernetes.io/projected/2b46a12b-a34f-4850-a1b8-a764ba798764-kube-api-access-btvcd\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361224 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361269 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qm9j\" (UniqueName: \"kubernetes.io/projected/eb7b571e-71f9-418e-a695-14df6e3956ab-kube-api-access-5qm9j\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361301 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-fernet-keys\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361328 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6fw7\" (UniqueName: \"kubernetes.io/projected/3b504616-21a1-400e-a1c0-152ea8dbcfb5-kube-api-access-f6fw7\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361393 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-scripts\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361418 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361448 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-combined-ca-bundle\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361478 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-config-data\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.361931 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.362497 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-nb\") pod 
\"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.379606 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-svc\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.380644 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-credential-keys\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.381253 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.385448 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-config\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.385562 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-combined-ca-bundle\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " 
pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.388861 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-config-data\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.394736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-fernet-keys\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.395436 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-scripts\") pod \"keystone-bootstrap-zt5zh\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.416444 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gdzkt"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.435865 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qm9j\" (UniqueName: \"kubernetes.io/projected/eb7b571e-71f9-418e-a695-14df6e3956ab-kube-api-access-5qm9j\") pod \"dnsmasq-dns-847c4cc679-dxfvz\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") " pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.462709 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6fw7\" (UniqueName: \"kubernetes.io/projected/3b504616-21a1-400e-a1c0-152ea8dbcfb5-kube-api-access-f6fw7\") pod \"keystone-bootstrap-zt5zh\" (UID: 
\"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.464005 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-combined-ca-bundle\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.464053 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-config-data\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.464188 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btvcd\" (UniqueName: \"kubernetes.io/projected/2b46a12b-a34f-4850-a1b8-a764ba798764-kube-api-access-btvcd\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.470145 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-combined-ca-bundle\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.482035 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-config-data\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.535315 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btvcd\" (UniqueName: \"kubernetes.io/projected/2b46a12b-a34f-4850-a1b8-a764ba798764-kube-api-access-btvcd\") pod \"heat-db-sync-gdzkt\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.543764 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.551614 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6b4rh"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.552946 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.557945 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-flpd6" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.558487 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.569777 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.570198 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gdzkt" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.583261 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rrlxq"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.584618 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.588400 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.591855 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wqpzp" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.596052 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.596434 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rrlxq"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.610991 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6b4rh"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.652721 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dxfvz"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.668223 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nksfx"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.669699 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.673873 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-combined-ca-bundle\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.673920 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq7m9\" (UniqueName: \"kubernetes.io/projected/3bbc77b3-5108-40f8-b057-18e305e8f8ba-kube-api-access-jq7m9\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.674000 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-db-sync-config-data\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.674037 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbc77b3-5108-40f8-b057-18e305e8f8ba-logs\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.674063 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-combined-ca-bundle\") pod \"placement-db-sync-rrlxq\" (UID: 
\"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.674082 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtll7\" (UniqueName: \"kubernetes.io/projected/1d67ec74-2a65-494e-a768-5cc6e6714e49-kube-api-access-gtll7\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.674135 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-scripts\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.674155 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-config-data\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.675864 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.676477 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zq6lj" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.697727 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.723298 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n6jc5"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.724871 4750 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.739207 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-thwkx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.740369 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.755762 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nksfx"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.760219 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.774985 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n6jc5"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.778511 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-scripts\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.778696 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.778819 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-config-data\") pod \"cinder-db-sync-n6jc5\" (UID: 
\"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.778933 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-combined-ca-bundle\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779037 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq7m9\" (UniqueName: \"kubernetes.io/projected/3bbc77b3-5108-40f8-b057-18e305e8f8ba-kube-api-access-jq7m9\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779211 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-config\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779357 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-db-sync-config-data\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779592 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-combined-ca-bundle\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " 
pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779737 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbc77b3-5108-40f8-b057-18e305e8f8ba-logs\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779846 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rrk\" (UniqueName: \"kubernetes.io/projected/8f1af0ef-9644-45bb-9c3f-c05350b180e8-kube-api-access-24rrk\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.779948 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-db-sync-config-data\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.780056 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-combined-ca-bundle\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.780196 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtll7\" (UniqueName: \"kubernetes.io/projected/1d67ec74-2a65-494e-a768-5cc6e6714e49-kube-api-access-gtll7\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc 
kubenswrapper[4750]: I0214 14:14:23.780316 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-etc-machine-id\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.780440 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-scripts\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.780538 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-config-data\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.780688 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xql4m\" (UniqueName: \"kubernetes.io/projected/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-kube-api-access-xql4m\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.782834 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbc77b3-5108-40f8-b057-18e305e8f8ba-logs\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.786037 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-combined-ca-bundle\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.787920 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-w759g"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.790486 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.798876 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-config-data\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.804989 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-db-sync-config-data\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.805100 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-combined-ca-bundle\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.805551 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-scripts\") pod \"placement-db-sync-rrlxq\" (UID: 
\"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.833251 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-w759g"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.843019 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtll7\" (UniqueName: \"kubernetes.io/projected/1d67ec74-2a65-494e-a768-5cc6e6714e49-kube-api-access-gtll7\") pod \"barbican-db-sync-6b4rh\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.880777 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq7m9\" (UniqueName: \"kubernetes.io/projected/3bbc77b3-5108-40f8-b057-18e305e8f8ba-kube-api-access-jq7m9\") pod \"placement-db-sync-rrlxq\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897371 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897458 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-config-data\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897513 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897545 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-config\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897570 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-combined-ca-bundle\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897590 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqps9\" (UniqueName: \"kubernetes.io/projected/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-kube-api-access-xqps9\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897627 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-config\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897665 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rrk\" (UniqueName: \"kubernetes.io/projected/8f1af0ef-9644-45bb-9c3f-c05350b180e8-kube-api-access-24rrk\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897685 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-db-sync-config-data\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897721 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-etc-machine-id\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897768 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xql4m\" (UniqueName: \"kubernetes.io/projected/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-kube-api-access-xql4m\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897802 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-scripts\") pod 
\"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897823 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.897848 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.910167 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.912259 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-etc-machine-id\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.913838 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.917017 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-config-data\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.917741 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.927195 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.941863 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.945347 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.945504 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.950607 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-scripts\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.962606 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rrlxq" Feb 14 14:14:23 crc kubenswrapper[4750]: I0214 14:14:23.993864 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rrk\" (UniqueName: \"kubernetes.io/projected/8f1af0ef-9644-45bb-9c3f-c05350b180e8-kube-api-access-24rrk\") pod \"neutron-db-sync-nksfx\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.000691 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-log-httpd\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.001070 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.001198 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.001319 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-run-httpd\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: 
I0214 14:14:24.001447 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.001536 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-scripts\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.001654 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dlw\" (UniqueName: \"kubernetes.io/projected/af719ce6-033d-4e45-8502-cc8ee8f091c0-kube-api-access-p6dlw\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.001744 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-combined-ca-bundle\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.002075 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.002848 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.005184 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.006150 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.006883 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.007457 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqps9\" (UniqueName: \"kubernetes.io/projected/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-kube-api-access-xqps9\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.007771 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-config\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.008329 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-config\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.008407 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.008429 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.008462 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-config-data\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.011797 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-config\") pod \"neutron-db-sync-nksfx\" (UID: 
\"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.016360 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xql4m\" (UniqueName: \"kubernetes.io/projected/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-kube-api-access-xql4m\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.053178 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-db-sync-config-data\") pod \"cinder-db-sync-n6jc5\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.076298 4750 generic.go:334] "Generic (PLEG): container finished" podID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerID="e05f3595e6db162f4b3e8c0fd258c666d48be9f063b7be8053fdeda2c4b7bb70" exitCode=0 Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.076347 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" event={"ID":"81a4f540-742b-4f92-9839-b2e5fc1aacfa","Type":"ContainerDied","Data":"e05f3595e6db162f4b3e8c0fd258c666d48be9f063b7be8053fdeda2c4b7bb70"} Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.082242 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqps9\" (UniqueName: \"kubernetes.io/projected/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-kube-api-access-xqps9\") pod \"dnsmasq-dns-785d8bcb8c-w759g\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.094932 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119254 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-run-httpd\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119362 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-scripts\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119402 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dlw\" (UniqueName: \"kubernetes.io/projected/af719ce6-033d-4e45-8502-cc8ee8f091c0-kube-api-access-p6dlw\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119515 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119543 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119571 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-config-data\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.119648 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-log-httpd\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.120713 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-log-httpd\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.135240 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-run-httpd\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.135842 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.137012 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.143542 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-config-data\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.148447 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-scripts\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.151074 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.157280 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dlw\" (UniqueName: \"kubernetes.io/projected/af719ce6-033d-4e45-8502-cc8ee8f091c0-kube-api-access-p6dlw\") pod \"ceilometer-0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.178688 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.223307 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.310605 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.324995 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-config\") pod \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.325059 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-nb\") pod \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.336815 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-sb\") pod \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.336949 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-swift-storage-0\") pod \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.337076 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjg4n\" (UniqueName: \"kubernetes.io/projected/81a4f540-742b-4f92-9839-b2e5fc1aacfa-kube-api-access-jjg4n\") pod \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.337121 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-svc\") pod \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\" (UID: \"81a4f540-742b-4f92-9839-b2e5fc1aacfa\") " Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.349984 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a4f540-742b-4f92-9839-b2e5fc1aacfa-kube-api-access-jjg4n" (OuterVolumeSpecName: "kube-api-access-jjg4n") pod "81a4f540-742b-4f92-9839-b2e5fc1aacfa" (UID: "81a4f540-742b-4f92-9839-b2e5fc1aacfa"). InnerVolumeSpecName "kube-api-access-jjg4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.394521 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:14:24 crc kubenswrapper[4750]: E0214 14:14:24.395057 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerName="dnsmasq-dns" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.395075 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerName="dnsmasq-dns" Feb 14 14:14:24 crc kubenswrapper[4750]: E0214 14:14:24.395105 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerName="init" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.395137 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerName="init" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.397285 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" containerName="dnsmasq-dns" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.398763 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.406832 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.407043 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.410897 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-66gxs" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.411071 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440420 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-logs\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440477 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440517 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4q6\" (UniqueName: \"kubernetes.io/projected/343a15a9-528b-4415-81e1-ab1864b53351-kube-api-access-cm4q6\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:24 crc 
kubenswrapper[4750]: I0214 14:14:24.440544 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-config-data\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440603 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440635 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-scripts\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440683 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440733 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.440808 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjg4n\" (UniqueName: \"kubernetes.io/projected/81a4f540-742b-4f92-9839-b2e5fc1aacfa-kube-api-access-jjg4n\") on node \"crc\" DevicePath \"\""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.458182 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.522853 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-config" (OuterVolumeSpecName: "config") pod "81a4f540-742b-4f92-9839-b2e5fc1aacfa" (UID: "81a4f540-742b-4f92-9839-b2e5fc1aacfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.529045 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81a4f540-742b-4f92-9839-b2e5fc1aacfa" (UID: "81a4f540-742b-4f92-9839-b2e5fc1aacfa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.530226 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81a4f540-742b-4f92-9839-b2e5fc1aacfa" (UID: "81a4f540-742b-4f92-9839-b2e5fc1aacfa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.543357 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.543762 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4q6\" (UniqueName: \"kubernetes.io/projected/343a15a9-528b-4415-81e1-ab1864b53351-kube-api-access-cm4q6\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.543863 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-config-data\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.543985 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544070 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-scripts\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544207 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544318 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544416 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-logs\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544531 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-config\") on node \"crc\" DevicePath \"\""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544591 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.544678 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.546596 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81a4f540-742b-4f92-9839-b2e5fc1aacfa" (UID: "81a4f540-742b-4f92-9839-b2e5fc1aacfa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.547730 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.556013 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-logs\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.557212 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.557324 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4116c04a4908c386919762e10ff51e59035632215c786316944928532f707927/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.559876 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.567080 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-config-data\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.582329 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-scripts\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.594018 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.599014 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4q6\" (UniqueName: \"kubernetes.io/projected/343a15a9-528b-4415-81e1-ab1864b53351-kube-api-access-cm4q6\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.623193 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.631055 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.635092 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.635463 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.648747 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.685628 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.712241 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.750721 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.753525 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-logs\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.753670 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmm87\" (UniqueName: \"kubernetes.io/projected/9fab76e3-df30-4482-b841-17005684be5f-kube-api-access-wmm87\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.753739 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.756194 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-scripts\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.756307 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.756453 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.756566 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-config-data\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.858796 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.858887 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-config-data\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.858928 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.858981 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-logs\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.859052 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmm87\" (UniqueName: \"kubernetes.io/projected/9fab76e3-df30-4482-b841-17005684be5f-kube-api-access-wmm87\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.859091 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.859170 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-scripts\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.859213 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.875218 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.875265 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01be76f23f3db65527afd99bd3e4627b46df2a8504625bc2942280577a76cd42/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.950049 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-logs\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.950175 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.950552 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81a4f540-742b-4f92-9839-b2e5fc1aacfa" (UID: "81a4f540-742b-4f92-9839-b2e5fc1aacfa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.953283 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-config-data\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.962158 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a4f540-742b-4f92-9839-b2e5fc1aacfa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.964349 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.964764 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-scripts\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.968064 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.971595 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:24 crc kubenswrapper[4750]: I0214 14:14:24.972718 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmm87\" (UniqueName: \"kubernetes.io/projected/9fab76e3-df30-4482-b841-17005684be5f-kube-api-access-wmm87\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.001066 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dxfvz"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.028051 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " pod="openstack/glance-default-external-api-0"
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.036723 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zt5zh"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.083222 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gdzkt"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.094490 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7" event={"ID":"81a4f540-742b-4f92-9839-b2e5fc1aacfa","Type":"ContainerDied","Data":"0d68787bfc6d13b65e1d18b416668bf14c5a21e766e23034d4de4c69ce075205"}
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.094547 4750 scope.go:117] "RemoveContainer" containerID="e05f3595e6db162f4b3e8c0fd258c666d48be9f063b7be8053fdeda2c4b7bb70"
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.094673 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8cwt7"
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.101463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt5zh" event={"ID":"3b504616-21a1-400e-a1c0-152ea8dbcfb5","Type":"ContainerStarted","Data":"ac96f7b8906e7eed72cd6993ec5fa87adefb6f3ba361fb1bc03a873efb2278ef"}
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.102992 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" event={"ID":"eb7b571e-71f9-418e-a695-14df6e3956ab","Type":"ContainerStarted","Data":"4bccc0b37dea29e8ae6838e8d31e6ba1d8faadd2ff14bfe362aedcf10eb33750"}
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.161398 4750 scope.go:117] "RemoveContainer" containerID="c5a4a6fcac22a0c240b90d05ad9e4d44b11b595ea24f898a138483388f87fb2c"
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.172224 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8cwt7"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.184242 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8cwt7"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.298555 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.421635 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rrlxq"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.447660 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.458037 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6b4rh"]
Feb 14 14:14:25 crc kubenswrapper[4750]: W0214 14:14:25.460931 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf719ce6_033d_4e45_8502_cc8ee8f091c0.slice/crio-149ccb773971c1a0d06fde0036bec1d3d28c2c5b6bb1ed637570608130b133d7 WatchSource:0}: Error finding container 149ccb773971c1a0d06fde0036bec1d3d28c2c5b6bb1ed637570608130b133d7: Status 404 returned error can't find the container with id 149ccb773971c1a0d06fde0036bec1d3d28c2c5b6bb1ed637570608130b133d7
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.470963 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n6jc5"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.589517 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-w759g"]
Feb 14 14:14:25 crc kubenswrapper[4750]: I0214 14:14:25.633719 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nksfx"]
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.080589 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.131729 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rrlxq" event={"ID":"3bbc77b3-5108-40f8-b057-18e305e8f8ba","Type":"ContainerStarted","Data":"3e140fb5ac35c5abaf9cabfd3d06bb855d9c8d87948342ee571efeb77fac0e39"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.139036 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nksfx" event={"ID":"8f1af0ef-9644-45bb-9c3f-c05350b180e8","Type":"ContainerStarted","Data":"8bc668725fd238ae39dd420ee5cc6f70476f2ea74225ef1662373364142c1a09"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.139082 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nksfx" event={"ID":"8f1af0ef-9644-45bb-9c3f-c05350b180e8","Type":"ContainerStarted","Data":"053993846c86f9d71b5106f655c9b61c2b62199e941c3cc469be7de3ee170d3b"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.149206 4750 generic.go:334] "Generic (PLEG): container finished" podID="eb7b571e-71f9-418e-a695-14df6e3956ab" containerID="7a7a048a074c216aae78a64fd98a158dcdb0467491c8f45ead118ed208694e34" exitCode=0
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.149430 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" event={"ID":"eb7b571e-71f9-418e-a695-14df6e3956ab","Type":"ContainerDied","Data":"7a7a048a074c216aae78a64fd98a158dcdb0467491c8f45ead118ed208694e34"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.163149 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gdzkt" event={"ID":"2b46a12b-a34f-4850-a1b8-a764ba798764","Type":"ContainerStarted","Data":"040613a512a4e850ce0ee9bd84484cdcfb9de8110bff6a92de8f60c887d9efdd"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.166511 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nksfx" podStartSLOduration=3.166499831 podStartE2EDuration="3.166499831s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:26.162502202 +0000 UTC m=+1338.188491683" watchObservedRunningTime="2026-02-14 14:14:26.166499831 +0000 UTC m=+1338.192489312"
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.173325 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6jc5" event={"ID":"6c96aa35-ddbc-4485-ac13-a2f08de1dd28","Type":"ContainerStarted","Data":"7af9d38679fdcecfac2c5eb66bb11456b3c39f18157df642670ccfb33a1b955f"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.214954 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerStarted","Data":"149ccb773971c1a0d06fde0036bec1d3d28c2c5b6bb1ed637570608130b133d7"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.239011 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6b4rh" event={"ID":"1d67ec74-2a65-494e-a768-5cc6e6714e49","Type":"ContainerStarted","Data":"e5d8c389a111e4edd18166f3a86c851fffbfcdf5d8323dd8ebda278fb8edff18"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.290286 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt5zh" event={"ID":"3b504616-21a1-400e-a1c0-152ea8dbcfb5","Type":"ContainerStarted","Data":"ecdea597626f4832e073c8620cb6f7bbe9818e1e86a740215bf00876be871a7e"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.307352 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"343a15a9-528b-4415-81e1-ab1864b53351","Type":"ContainerStarted","Data":"3e60bed3b34b96f3ccc6bfedb8a542ff5c40ff69c2bbea53ec5a328c55dc0162"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.364130 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zt5zh" podStartSLOduration=3.364091338 podStartE2EDuration="3.364091338s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:26.315945743 +0000 UTC m=+1338.341935214" watchObservedRunningTime="2026-02-14 14:14:26.364091338 +0000 UTC m=+1338.390080809"
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.415419 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.449498 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.480576 4750 generic.go:334] "Generic (PLEG): container finished" podID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerID="4987f543ecd138f730848ca11e139742a3225227593d1c7c826362fef087483b" exitCode=0
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.480621 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" event={"ID":"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1","Type":"ContainerDied","Data":"4987f543ecd138f730848ca11e139742a3225227593d1c7c826362fef087483b"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.480647 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" event={"ID":"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1","Type":"ContainerStarted","Data":"8883661350f6f277c4d61705bf1dc53bcb54149014b3af84f9fad77c0195a330"}
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.524232 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.726660 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz"
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.789161 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a4f540-742b-4f92-9839-b2e5fc1aacfa" path="/var/lib/kubelet/pods/81a4f540-742b-4f92-9839-b2e5fc1aacfa/volumes"
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.799759 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.852321 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-svc\") pod \"eb7b571e-71f9-418e-a695-14df6e3956ab\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") "
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.852373 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qm9j\" (UniqueName: \"kubernetes.io/projected/eb7b571e-71f9-418e-a695-14df6e3956ab-kube-api-access-5qm9j\") pod \"eb7b571e-71f9-418e-a695-14df6e3956ab\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") "
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.852420 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-sb\") pod \"eb7b571e-71f9-418e-a695-14df6e3956ab\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") "
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.852451 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-nb\") pod \"eb7b571e-71f9-418e-a695-14df6e3956ab\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") "
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.852512 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-swift-storage-0\") pod \"eb7b571e-71f9-418e-a695-14df6e3956ab\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") "
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.852613 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-config\") pod \"eb7b571e-71f9-418e-a695-14df6e3956ab\" (UID: \"eb7b571e-71f9-418e-a695-14df6e3956ab\") "
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.872816 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7b571e-71f9-418e-a695-14df6e3956ab-kube-api-access-5qm9j" (OuterVolumeSpecName: "kube-api-access-5qm9j") pod "eb7b571e-71f9-418e-a695-14df6e3956ab" (UID: "eb7b571e-71f9-418e-a695-14df6e3956ab"). InnerVolumeSpecName "kube-api-access-5qm9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.898861 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb7b571e-71f9-418e-a695-14df6e3956ab" (UID: "eb7b571e-71f9-418e-a695-14df6e3956ab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.909225 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-config" (OuterVolumeSpecName: "config") pod "eb7b571e-71f9-418e-a695-14df6e3956ab" (UID: "eb7b571e-71f9-418e-a695-14df6e3956ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.939742 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb7b571e-71f9-418e-a695-14df6e3956ab" (UID: "eb7b571e-71f9-418e-a695-14df6e3956ab"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.943026 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb7b571e-71f9-418e-a695-14df6e3956ab" (UID: "eb7b571e-71f9-418e-a695-14df6e3956ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.946506 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb7b571e-71f9-418e-a695-14df6e3956ab" (UID: "eb7b571e-71f9-418e-a695-14df6e3956ab"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.954759 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.954788 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qm9j\" (UniqueName: \"kubernetes.io/projected/eb7b571e-71f9-418e-a695-14df6e3956ab-kube-api-access-5qm9j\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.954800 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.954809 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.954846 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:26 crc kubenswrapper[4750]: I0214 14:14:26.954855 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b571e-71f9-418e-a695-14df6e3956ab-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.500853 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" event={"ID":"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1","Type":"ContainerStarted","Data":"e5c65e47b3f367863d5a39211f94bc71aad771eb5b021b1a8e6c7341d048506b"} Feb 14 14:14:27 crc 
kubenswrapper[4750]: I0214 14:14:27.501358 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.508899 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" event={"ID":"eb7b571e-71f9-418e-a695-14df6e3956ab","Type":"ContainerDied","Data":"4bccc0b37dea29e8ae6838e8d31e6ba1d8faadd2ff14bfe362aedcf10eb33750"} Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.508934 4750 scope.go:117] "RemoveContainer" containerID="7a7a048a074c216aae78a64fd98a158dcdb0467491c8f45ead118ed208694e34" Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.509031 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dxfvz" Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.523818 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" podStartSLOduration=4.523793955 podStartE2EDuration="4.523793955s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:27.518404707 +0000 UTC m=+1339.544394188" watchObservedRunningTime="2026-02-14 14:14:27.523793955 +0000 UTC m=+1339.549783436" Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.532590 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"343a15a9-528b-4415-81e1-ab1864b53351","Type":"ContainerStarted","Data":"eb2fd404132bd3991dea833d44fdc919e6fd6fde3443d4b4f0f25755de26d8bc"} Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.540077 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9fab76e3-df30-4482-b841-17005684be5f","Type":"ContainerStarted","Data":"4ec4a59ff54c26aad86462c2f3ef7a8ed431cc83d733341aadd88382dac30cec"} Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.730996 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dxfvz"] Feb 14 14:14:27 crc kubenswrapper[4750]: I0214 14:14:27.770293 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dxfvz"] Feb 14 14:14:28 crc kubenswrapper[4750]: I0214 14:14:28.565285 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9fab76e3-df30-4482-b841-17005684be5f","Type":"ContainerStarted","Data":"b3b34e093ff96cf8640a54ec9095d3ff9b5f5fbbdbf1a92a4843b464e03d7e14"} Feb 14 14:14:28 crc kubenswrapper[4750]: I0214 14:14:28.759084 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb7b571e-71f9-418e-a695-14df6e3956ab" path="/var/lib/kubelet/pods/eb7b571e-71f9-418e-a695-14df6e3956ab/volumes" Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.543155 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.552768 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.584323 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"343a15a9-528b-4415-81e1-ab1864b53351","Type":"ContainerStarted","Data":"cef9bcec3bab4d1f6108a88f7ab2c73e1981084e410d68f83036a3d8930fa2d6"} Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.584489 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-log" 
containerID="cri-o://eb2fd404132bd3991dea833d44fdc919e6fd6fde3443d4b4f0f25755de26d8bc" gracePeriod=30 Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.584515 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-httpd" containerID="cri-o://cef9bcec3bab4d1f6108a88f7ab2c73e1981084e410d68f83036a3d8930fa2d6" gracePeriod=30 Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.595419 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9fab76e3-df30-4482-b841-17005684be5f","Type":"ContainerStarted","Data":"da4d1158d3eb57da2ca6378546bf15260ec78559f7287df463f5799dc1f31045"} Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.595587 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-httpd" containerID="cri-o://da4d1158d3eb57da2ca6378546bf15260ec78559f7287df463f5799dc1f31045" gracePeriod=30 Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.595758 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-log" containerID="cri-o://b3b34e093ff96cf8640a54ec9095d3ff9b5f5fbbdbf1a92a4843b464e03d7e14" gracePeriod=30 Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.602508 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.615962 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.615943272 podStartE2EDuration="6.615943272s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:29.610460652 +0000 UTC m=+1341.636450133" watchObservedRunningTime="2026-02-14 14:14:29.615943272 +0000 UTC m=+1341.641932753" Feb 14 14:14:29 crc kubenswrapper[4750]: I0214 14:14:29.650420 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.644646156 podStartE2EDuration="6.644646156s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:29.642691912 +0000 UTC m=+1341.668681403" watchObservedRunningTime="2026-02-14 14:14:29.644646156 +0000 UTC m=+1341.670635637" Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.132377 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.132637 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.617421 4750 generic.go:334] "Generic (PLEG): container finished" podID="343a15a9-528b-4415-81e1-ab1864b53351" containerID="cef9bcec3bab4d1f6108a88f7ab2c73e1981084e410d68f83036a3d8930fa2d6" exitCode=0 Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.617727 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="343a15a9-528b-4415-81e1-ab1864b53351" containerID="eb2fd404132bd3991dea833d44fdc919e6fd6fde3443d4b4f0f25755de26d8bc" exitCode=143 Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.617495 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"343a15a9-528b-4415-81e1-ab1864b53351","Type":"ContainerDied","Data":"cef9bcec3bab4d1f6108a88f7ab2c73e1981084e410d68f83036a3d8930fa2d6"} Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.617828 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"343a15a9-528b-4415-81e1-ab1864b53351","Type":"ContainerDied","Data":"eb2fd404132bd3991dea833d44fdc919e6fd6fde3443d4b4f0f25755de26d8bc"} Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.620337 4750 generic.go:334] "Generic (PLEG): container finished" podID="9fab76e3-df30-4482-b841-17005684be5f" containerID="da4d1158d3eb57da2ca6378546bf15260ec78559f7287df463f5799dc1f31045" exitCode=0 Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.620366 4750 generic.go:334] "Generic (PLEG): container finished" podID="9fab76e3-df30-4482-b841-17005684be5f" containerID="b3b34e093ff96cf8640a54ec9095d3ff9b5f5fbbdbf1a92a4843b464e03d7e14" exitCode=143 Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.620417 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9fab76e3-df30-4482-b841-17005684be5f","Type":"ContainerDied","Data":"da4d1158d3eb57da2ca6378546bf15260ec78559f7287df463f5799dc1f31045"} Feb 14 14:14:30 crc kubenswrapper[4750]: I0214 14:14:30.620444 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9fab76e3-df30-4482-b841-17005684be5f","Type":"ContainerDied","Data":"b3b34e093ff96cf8640a54ec9095d3ff9b5f5fbbdbf1a92a4843b464e03d7e14"} Feb 14 14:14:31 crc kubenswrapper[4750]: I0214 14:14:31.631510 4750 generic.go:334] "Generic 
(PLEG): container finished" podID="3b504616-21a1-400e-a1c0-152ea8dbcfb5" containerID="ecdea597626f4832e073c8620cb6f7bbe9818e1e86a740215bf00876be871a7e" exitCode=0 Feb 14 14:14:31 crc kubenswrapper[4750]: I0214 14:14:31.631552 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt5zh" event={"ID":"3b504616-21a1-400e-a1c0-152ea8dbcfb5","Type":"ContainerDied","Data":"ecdea597626f4832e073c8620cb6f7bbe9818e1e86a740215bf00876be871a7e"} Feb 14 14:14:34 crc kubenswrapper[4750]: I0214 14:14:34.099267 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:14:34 crc kubenswrapper[4750]: I0214 14:14:34.207839 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4cmz6"] Feb 14 14:14:34 crc kubenswrapper[4750]: I0214 14:14:34.208129 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-4cmz6" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" containerID="cri-o://ecf02edf9e0a678876b704cf639f9fc4021f1a6022b8782719129cee8ebe5260" gracePeriod=10 Feb 14 14:14:34 crc kubenswrapper[4750]: I0214 14:14:34.668695 4750 generic.go:334] "Generic (PLEG): container finished" podID="a357c661-43db-45de-af4d-930aa51c9743" containerID="ecf02edf9e0a678876b704cf639f9fc4021f1a6022b8782719129cee8ebe5260" exitCode=0 Feb 14 14:14:34 crc kubenswrapper[4750]: I0214 14:14:34.668809 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4cmz6" event={"ID":"a357c661-43db-45de-af4d-930aa51c9743","Type":"ContainerDied","Data":"ecf02edf9e0a678876b704cf639f9fc4021f1a6022b8782719129cee8ebe5260"} Feb 14 14:14:35 crc kubenswrapper[4750]: I0214 14:14:35.396004 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-4cmz6" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.223024 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337351 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-scripts\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337413 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-combined-ca-bundle\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337446 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-internal-tls-certs\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337500 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-logs\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337526 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm4q6\" (UniqueName: \"kubernetes.io/projected/343a15a9-528b-4415-81e1-ab1864b53351-kube-api-access-cm4q6\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: 
\"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337556 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-httpd-run\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337705 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.337726 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-config-data\") pod \"343a15a9-528b-4415-81e1-ab1864b53351\" (UID: \"343a15a9-528b-4415-81e1-ab1864b53351\") " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.339516 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.339825 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-logs" (OuterVolumeSpecName: "logs") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.345426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343a15a9-528b-4415-81e1-ab1864b53351-kube-api-access-cm4q6" (OuterVolumeSpecName: "kube-api-access-cm4q6") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "kube-api-access-cm4q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.345553 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-scripts" (OuterVolumeSpecName: "scripts") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.378570 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b" (OuterVolumeSpecName: "glance") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "pvc-f8aa6b2a-87f7-4392-adda-43640f80683b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.405855 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-config-data" (OuterVolumeSpecName: "config-data") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.420396 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.423400 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "343a15a9-528b-4415-81e1-ab1864b53351" (UID: "343a15a9-528b-4415-81e1-ab1864b53351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439810 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439849 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm4q6\" (UniqueName: \"kubernetes.io/projected/343a15a9-528b-4415-81e1-ab1864b53351-kube-api-access-cm4q6\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439859 4750 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/343a15a9-528b-4415-81e1-ab1864b53351-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439895 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") on node \"crc\" " Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439905 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439914 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439922 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.439931 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343a15a9-528b-4415-81e1-ab1864b53351-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.470561 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.470694 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f8aa6b2a-87f7-4392-adda-43640f80683b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b") on node "crc" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.541498 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.700853 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"343a15a9-528b-4415-81e1-ab1864b53351","Type":"ContainerDied","Data":"3e60bed3b34b96f3ccc6bfedb8a542ff5c40ff69c2bbea53ec5a328c55dc0162"} Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.700928 4750 scope.go:117] "RemoveContainer" containerID="cef9bcec3bab4d1f6108a88f7ab2c73e1981084e410d68f83036a3d8930fa2d6" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.701094 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.781244 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.797601 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.810198 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:14:36 crc kubenswrapper[4750]: E0214 14:14:36.811136 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb7b571e-71f9-418e-a695-14df6e3956ab" containerName="init" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.811158 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb7b571e-71f9-418e-a695-14df6e3956ab" containerName="init" Feb 14 14:14:36 crc kubenswrapper[4750]: E0214 14:14:36.811194 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-httpd" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.811214 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-httpd" Feb 14 14:14:36 crc kubenswrapper[4750]: E0214 14:14:36.811239 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-log" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.811246 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-log" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.812452 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7b571e-71f9-418e-a695-14df6e3956ab" containerName="init" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.812486 4750 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-httpd" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.812499 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="343a15a9-528b-4415-81e1-ab1864b53351" containerName="glance-log" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.814267 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.816693 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.817822 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.823462 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952463 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952548 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-logs\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952727 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7n48\" (UniqueName: 
\"kubernetes.io/projected/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-kube-api-access-v7n48\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952859 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952900 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952940 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.952999 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:36 crc kubenswrapper[4750]: I0214 14:14:36.953098 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.054745 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-logs\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.054840 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7n48\" (UniqueName: \"kubernetes.io/projected/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-kube-api-access-v7n48\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.054911 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.054940 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.054988 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.055035 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.055174 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.055327 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.057475 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.058919 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-logs\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.060302 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.060344 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4116c04a4908c386919762e10ff51e59035632215c786316944928532f707927/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.060845 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.061799 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.063861 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.066229 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.077701 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7n48\" (UniqueName: \"kubernetes.io/projected/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-kube-api-access-v7n48\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.116195 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:14:37 crc kubenswrapper[4750]: I0214 14:14:37.141061 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:14:38 crc kubenswrapper[4750]: I0214 14:14:38.758615 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343a15a9-528b-4415-81e1-ab1864b53351" path="/var/lib/kubelet/pods/343a15a9-528b-4415-81e1-ab1864b53351/volumes" Feb 14 14:14:39 crc kubenswrapper[4750]: E0214 14:14:39.477063 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 14 14:14:39 crc kubenswrapper[4750]: E0214 14:14:39.477322 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n555h688h5c5h5d5h668h58ch65fh59fh5b9h9chffh656h5b7h699h688h67fh587h5fbh655h5f5hd9h68dhd4h649h5fbhd8h598h65dh54fh6dh58bh56dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},VolumeMount{Name:kube-api-access-p6dlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(af719ce6-033d-4e45-8502-cc8ee8f091c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:14:40 crc kubenswrapper[4750]: I0214 14:14:40.395638 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-4cmz6" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.372900 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.378771 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.513909 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmm87\" (UniqueName: \"kubernetes.io/projected/9fab76e3-df30-4482-b841-17005684be5f-kube-api-access-wmm87\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.513972 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-config-data\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.514211 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.514235 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6fw7\" (UniqueName: \"kubernetes.io/projected/3b504616-21a1-400e-a1c0-152ea8dbcfb5-kube-api-access-f6fw7\") pod \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.514253 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-combined-ca-bundle\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.514562 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-scripts\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515081 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-config-data\") pod \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515234 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-combined-ca-bundle\") pod \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515294 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-credential-keys\") pod \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515332 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-logs\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515377 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-public-tls-certs\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: 
I0214 14:14:43.515396 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-fernet-keys\") pod \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515488 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-scripts\") pod \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\" (UID: \"3b504616-21a1-400e-a1c0-152ea8dbcfb5\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515541 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-httpd-run\") pod \"9fab76e3-df30-4482-b841-17005684be5f\" (UID: \"9fab76e3-df30-4482-b841-17005684be5f\") " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.515939 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-logs" (OuterVolumeSpecName: "logs") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.519683 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.520614 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fab76e3-df30-4482-b841-17005684be5f-kube-api-access-wmm87" (OuterVolumeSpecName: "kube-api-access-wmm87") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "kube-api-access-wmm87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.521227 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3b504616-21a1-400e-a1c0-152ea8dbcfb5" (UID: "3b504616-21a1-400e-a1c0-152ea8dbcfb5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.522408 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-scripts" (OuterVolumeSpecName: "scripts") pod "3b504616-21a1-400e-a1c0-152ea8dbcfb5" (UID: "3b504616-21a1-400e-a1c0-152ea8dbcfb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.522871 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-scripts" (OuterVolumeSpecName: "scripts") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.522973 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b504616-21a1-400e-a1c0-152ea8dbcfb5-kube-api-access-f6fw7" (OuterVolumeSpecName: "kube-api-access-f6fw7") pod "3b504616-21a1-400e-a1c0-152ea8dbcfb5" (UID: "3b504616-21a1-400e-a1c0-152ea8dbcfb5"). InnerVolumeSpecName "kube-api-access-f6fw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.523377 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.534135 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b504616-21a1-400e-a1c0-152ea8dbcfb5" (UID: "3b504616-21a1-400e-a1c0-152ea8dbcfb5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.559214 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe" (OuterVolumeSpecName: "glance") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "pvc-27549279-a7a4-4066-b282-15513a49b9fe". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.565572 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-config-data" (OuterVolumeSpecName: "config-data") pod "3b504616-21a1-400e-a1c0-152ea8dbcfb5" (UID: "3b504616-21a1-400e-a1c0-152ea8dbcfb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.576260 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b504616-21a1-400e-a1c0-152ea8dbcfb5" (UID: "3b504616-21a1-400e-a1c0-152ea8dbcfb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.581229 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.604425 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-config-data" (OuterVolumeSpecName: "config-data") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.614104 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9fab76e3-df30-4482-b841-17005684be5f" (UID: "9fab76e3-df30-4482-b841-17005684be5f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621446 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") on node \"crc\" " Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621476 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6fw7\" (UniqueName: \"kubernetes.io/projected/3b504616-21a1-400e-a1c0-152ea8dbcfb5-kube-api-access-f6fw7\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621488 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621497 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621510 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621518 4750 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621528 4750 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621537 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621545 4750 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621553 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b504616-21a1-400e-a1c0-152ea8dbcfb5-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621561 4750 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fab76e3-df30-4482-b841-17005684be5f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621569 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmm87\" (UniqueName: \"kubernetes.io/projected/9fab76e3-df30-4482-b841-17005684be5f-kube-api-access-wmm87\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.621577 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab76e3-df30-4482-b841-17005684be5f-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.648071 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.648212 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-27549279-a7a4-4066-b282-15513a49b9fe" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe") on node "crc" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.723931 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.793150 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt5zh" event={"ID":"3b504616-21a1-400e-a1c0-152ea8dbcfb5","Type":"ContainerDied","Data":"ac96f7b8906e7eed72cd6993ec5fa87adefb6f3ba361fb1bc03a873efb2278ef"} Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.793424 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac96f7b8906e7eed72cd6993ec5fa87adefb6f3ba361fb1bc03a873efb2278ef" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.793187 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zt5zh" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.795182 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9fab76e3-df30-4482-b841-17005684be5f","Type":"ContainerDied","Data":"4ec4a59ff54c26aad86462c2f3ef7a8ed431cc83d733341aadd88382dac30cec"} Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.795198 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.843288 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.859987 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.870415 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:14:43 crc kubenswrapper[4750]: E0214 14:14:43.870864 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-httpd" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.870881 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-httpd" Feb 14 14:14:43 crc kubenswrapper[4750]: E0214 14:14:43.870918 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b504616-21a1-400e-a1c0-152ea8dbcfb5" containerName="keystone-bootstrap" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.870927 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b504616-21a1-400e-a1c0-152ea8dbcfb5" containerName="keystone-bootstrap" Feb 14 14:14:43 crc kubenswrapper[4750]: E0214 14:14:43.870941 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-log" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.870948 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-log" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.871177 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-httpd" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 
14:14:43.871192 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fab76e3-df30-4482-b841-17005684be5f" containerName="glance-log" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.871201 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b504616-21a1-400e-a1c0-152ea8dbcfb5" containerName="keystone-bootstrap" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.872321 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.875513 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.875723 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 14:14:43 crc kubenswrapper[4750]: I0214 14:14:43.893691 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.030630 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.030688 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.030713 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.031082 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-logs\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.031274 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.031519 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.031616 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 
14:14:44.031698 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsz4\" (UniqueName: \"kubernetes.io/projected/9e9f3055-374a-4522-ac07-4f8d55550521-kube-api-access-6lsz4\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.133664 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.133730 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.133760 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsz4\" (UniqueName: \"kubernetes.io/projected/9e9f3055-374a-4522-ac07-4f8d55550521-kube-api-access-6lsz4\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.133801 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 
14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.133823 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.133840 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.134738 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-logs\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.134794 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.135155 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.135358 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-logs\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.135793 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.135870 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01be76f23f3db65527afd99bd3e4627b46df2a8504625bc2942280577a76cd42/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.138018 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.138666 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.139948 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.140050 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.148427 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsz4\" (UniqueName: \"kubernetes.io/projected/9e9f3055-374a-4522-ac07-4f8d55550521-kube-api-access-6lsz4\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.172098 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.196659 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.526861 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zt5zh"] Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.536825 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zt5zh"] Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.617759 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x9pkr"] Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.619278 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.621201 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.621480 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.621695 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtfpt" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.622769 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.622896 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.629808 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x9pkr"] Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.754911 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b504616-21a1-400e-a1c0-152ea8dbcfb5" path="/var/lib/kubelet/pods/3b504616-21a1-400e-a1c0-152ea8dbcfb5/volumes" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 
14:14:44.755708 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fab76e3-df30-4482-b841-17005684be5f" path="/var/lib/kubelet/pods/9fab76e3-df30-4482-b841-17005684be5f/volumes" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.761991 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-config-data\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.762169 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-fernet-keys\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.762245 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-credential-keys\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.762281 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7cj9\" (UniqueName: \"kubernetes.io/projected/9237972d-c2ac-4ee2-a178-6a750be9c50a-kube-api-access-f7cj9\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.762370 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-scripts\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.762822 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-combined-ca-bundle\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.864712 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-combined-ca-bundle\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.865023 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-config-data\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.865205 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-fernet-keys\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.865328 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-credential-keys\") pod 
\"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.865424 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7cj9\" (UniqueName: \"kubernetes.io/projected/9237972d-c2ac-4ee2-a178-6a750be9c50a-kube-api-access-f7cj9\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.865550 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-scripts\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.878336 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-fernet-keys\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.878465 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-combined-ca-bundle\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.880440 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-credential-keys\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 
14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.881033 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-config-data\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.886276 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-scripts\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.886328 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7cj9\" (UniqueName: \"kubernetes.io/projected/9237972d-c2ac-4ee2-a178-6a750be9c50a-kube-api-access-f7cj9\") pod \"keystone-bootstrap-x9pkr\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:44 crc kubenswrapper[4750]: I0214 14:14:44.943897 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:14:45 crc kubenswrapper[4750]: I0214 14:14:45.396639 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-4cmz6" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 14 14:14:45 crc kubenswrapper[4750]: I0214 14:14:45.396777 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:14:46 crc kubenswrapper[4750]: I0214 14:14:46.828793 4750 generic.go:334] "Generic (PLEG): container finished" podID="8f1af0ef-9644-45bb-9c3f-c05350b180e8" containerID="8bc668725fd238ae39dd420ee5cc6f70476f2ea74225ef1662373364142c1a09" exitCode=0 Feb 14 14:14:46 crc kubenswrapper[4750]: I0214 14:14:46.828903 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nksfx" event={"ID":"8f1af0ef-9644-45bb-9c3f-c05350b180e8","Type":"ContainerDied","Data":"8bc668725fd238ae39dd420ee5cc6f70476f2ea74225ef1662373364142c1a09"} Feb 14 14:14:50 crc kubenswrapper[4750]: I0214 14:14:50.395717 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-4cmz6" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.014686 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.015263 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c 
barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtll7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6b4rh_openstack(1d67ec74-2a65-494e-a768-5cc6e6714e49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.016440 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6b4rh" 
podUID="1d67ec74-2a65-494e-a768-5cc6e6714e49" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.272298 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.272485 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btvcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],
Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-gdzkt_openstack(2b46a12b-a34f-4850-a1b8-a764ba798764): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.273772 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-gdzkt" podUID="2b46a12b-a34f-4850-a1b8-a764ba798764" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.369899 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.543923 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rrk\" (UniqueName: \"kubernetes.io/projected/8f1af0ef-9644-45bb-9c3f-c05350b180e8-kube-api-access-24rrk\") pod \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.544249 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle\") pod \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.544612 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-config\") pod \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.562394 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1af0ef-9644-45bb-9c3f-c05350b180e8-kube-api-access-24rrk" (OuterVolumeSpecName: "kube-api-access-24rrk") pod "8f1af0ef-9644-45bb-9c3f-c05350b180e8" (UID: "8f1af0ef-9644-45bb-9c3f-c05350b180e8"). InnerVolumeSpecName "kube-api-access-24rrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.641315 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-config" (OuterVolumeSpecName: "config") pod "8f1af0ef-9644-45bb-9c3f-c05350b180e8" (UID: "8f1af0ef-9644-45bb-9c3f-c05350b180e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.647392 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f1af0ef-9644-45bb-9c3f-c05350b180e8" (UID: "8f1af0ef-9644-45bb-9c3f-c05350b180e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.647993 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle\") pod \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\" (UID: \"8f1af0ef-9644-45bb-9c3f-c05350b180e8\") " Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.648672 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.648698 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24rrk\" (UniqueName: \"kubernetes.io/projected/8f1af0ef-9644-45bb-9c3f-c05350b180e8-kube-api-access-24rrk\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:52 crc kubenswrapper[4750]: W0214 14:14:52.648799 4750 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f1af0ef-9644-45bb-9c3f-c05350b180e8/volumes/kubernetes.io~secret/combined-ca-bundle Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.648818 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f1af0ef-9644-45bb-9c3f-c05350b180e8" (UID: "8f1af0ef-9644-45bb-9c3f-c05350b180e8"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.752021 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1af0ef-9644-45bb-9c3f-c05350b180e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.909306 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nksfx" event={"ID":"8f1af0ef-9644-45bb-9c3f-c05350b180e8","Type":"ContainerDied","Data":"053993846c86f9d71b5106f655c9b61c2b62199e941c3cc469be7de3ee170d3b"} Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.909374 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053993846c86f9d71b5106f655c9b61c2b62199e941c3cc469be7de3ee170d3b" Feb 14 14:14:52 crc kubenswrapper[4750]: I0214 14:14:52.909410 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nksfx" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.911277 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-gdzkt" podUID="2b46a12b-a34f-4850-a1b8-a764ba798764" Feb 14 14:14:52 crc kubenswrapper[4750]: E0214 14:14:52.912237 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6b4rh" podUID="1d67ec74-2a65-494e-a768-5cc6e6714e49" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.564827 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crnrm"] Feb 14 14:14:53 crc kubenswrapper[4750]: E0214 14:14:53.565736 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1af0ef-9644-45bb-9c3f-c05350b180e8" containerName="neutron-db-sync" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.565756 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1af0ef-9644-45bb-9c3f-c05350b180e8" containerName="neutron-db-sync" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.566077 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1af0ef-9644-45bb-9c3f-c05350b180e8" containerName="neutron-db-sync" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.567614 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.583524 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crnrm"] Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.664174 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-789b68c888-x8zqh"] Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.666057 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.669582 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zq6lj" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.669832 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.669966 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.670063 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.680914 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-789b68c888-x8zqh"] Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.681518 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.681569 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v6rtj\" (UniqueName: \"kubernetes.io/projected/e30389ba-d985-4b42-b348-c8f0e8109e09-kube-api-access-v6rtj\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.681605 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.681645 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.681666 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-svc\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.682068 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-config\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784158 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-config\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784275 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784302 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-httpd-config\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784339 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6rtj\" (UniqueName: \"kubernetes.io/projected/e30389ba-d985-4b42-b348-c8f0e8109e09-kube-api-access-v6rtj\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784372 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-config\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784402 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784426 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-ovndb-tls-certs\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784480 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784528 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-svc\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784621 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-combined-ca-bundle\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.784689 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnd4\" (UniqueName: 
\"kubernetes.io/projected/925a32d5-f7e2-4339-84ea-a5011abb19f1-kube-api-access-7dnd4\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.785139 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-config\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.785189 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.785758 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.785819 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.788164 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-svc\") pod 
\"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.813845 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6rtj\" (UniqueName: \"kubernetes.io/projected/e30389ba-d985-4b42-b348-c8f0e8109e09-kube-api-access-v6rtj\") pod \"dnsmasq-dns-55f844cf75-crnrm\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.886933 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-ovndb-tls-certs\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.887082 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-combined-ca-bundle\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.887141 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnd4\" (UniqueName: \"kubernetes.io/projected/925a32d5-f7e2-4339-84ea-a5011abb19f1-kube-api-access-7dnd4\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.887283 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-httpd-config\") pod \"neutron-789b68c888-x8zqh\" (UID: 
\"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.887374 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-config\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.891689 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-ovndb-tls-certs\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.891976 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-config\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.894884 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-combined-ca-bundle\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.895612 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.897706 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-httpd-config\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.907658 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnd4\" (UniqueName: \"kubernetes.io/projected/925a32d5-f7e2-4339-84ea-a5011abb19f1-kube-api-access-7dnd4\") pod \"neutron-789b68c888-x8zqh\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:53 crc kubenswrapper[4750]: I0214 14:14:53.990897 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.196000 4750 scope.go:117] "RemoveContainer" containerID="eb2fd404132bd3991dea833d44fdc919e6fd6fde3443d4b4f0f25755de26d8bc" Feb 14 14:14:54 crc kubenswrapper[4750]: E0214 14:14:54.224088 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 14 14:14:54 crc kubenswrapper[4750]: E0214 14:14:54.224516 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xql4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n6jc5_openstack(6c96aa35-ddbc-4485-ac13-a2f08de1dd28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:14:54 crc kubenswrapper[4750]: E0214 14:14:54.225795 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n6jc5" podUID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.286997 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.398832 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-config\") pod \"a357c661-43db-45de-af4d-930aa51c9743\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.401427 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpm7h\" (UniqueName: \"kubernetes.io/projected/a357c661-43db-45de-af4d-930aa51c9743-kube-api-access-cpm7h\") pod \"a357c661-43db-45de-af4d-930aa51c9743\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.401466 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-nb\") pod \"a357c661-43db-45de-af4d-930aa51c9743\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " Feb 14 
14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.401640 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-dns-svc\") pod \"a357c661-43db-45de-af4d-930aa51c9743\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.401738 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-sb\") pod \"a357c661-43db-45de-af4d-930aa51c9743\" (UID: \"a357c661-43db-45de-af4d-930aa51c9743\") " Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.407659 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a357c661-43db-45de-af4d-930aa51c9743-kube-api-access-cpm7h" (OuterVolumeSpecName: "kube-api-access-cpm7h") pod "a357c661-43db-45de-af4d-930aa51c9743" (UID: "a357c661-43db-45de-af4d-930aa51c9743"). InnerVolumeSpecName "kube-api-access-cpm7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.459862 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a357c661-43db-45de-af4d-930aa51c9743" (UID: "a357c661-43db-45de-af4d-930aa51c9743"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.471214 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a357c661-43db-45de-af4d-930aa51c9743" (UID: "a357c661-43db-45de-af4d-930aa51c9743"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.475135 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-config" (OuterVolumeSpecName: "config") pod "a357c661-43db-45de-af4d-930aa51c9743" (UID: "a357c661-43db-45de-af4d-930aa51c9743"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.505610 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.505937 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.506027 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpm7h\" (UniqueName: \"kubernetes.io/projected/a357c661-43db-45de-af4d-930aa51c9743-kube-api-access-cpm7h\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.506120 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.508605 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a357c661-43db-45de-af4d-930aa51c9743" (UID: "a357c661-43db-45de-af4d-930aa51c9743"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.608457 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a357c661-43db-45de-af4d-930aa51c9743-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.855546 4750 scope.go:117] "RemoveContainer" containerID="da4d1158d3eb57da2ca6378546bf15260ec78559f7287df463f5799dc1f31045" Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.936956 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4cmz6" event={"ID":"a357c661-43db-45de-af4d-930aa51c9743","Type":"ContainerDied","Data":"2261f6ab5da71eb79a282d5f2e854d2dad2a0e4bfcc7364f2a9275da1038261f"} Feb 14 14:14:54 crc kubenswrapper[4750]: I0214 14:14:54.937307 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4cmz6" Feb 14 14:14:54 crc kubenswrapper[4750]: E0214 14:14:54.954220 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-n6jc5" podUID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.124404 4750 scope.go:117] "RemoveContainer" containerID="b3b34e093ff96cf8640a54ec9095d3ff9b5f5fbbdbf1a92a4843b464e03d7e14" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.156340 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4cmz6"] Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.168392 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4cmz6"] Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.169631 4750 scope.go:117] 
"RemoveContainer" containerID="ecf02edf9e0a678876b704cf639f9fc4021f1a6022b8782719129cee8ebe5260" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.194919 4750 scope.go:117] "RemoveContainer" containerID="b80c67ab0f3baa5c5b7b2899085c082ccb8b9b0e5cf8d1941b9936862d0ccccf" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.493896 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x9pkr"] Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.671227 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-548b64445c-bzzcp"] Feb 14 14:14:55 crc kubenswrapper[4750]: E0214 14:14:55.672102 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.672273 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" Feb 14 14:14:55 crc kubenswrapper[4750]: E0214 14:14:55.672286 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="init" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.672292 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="init" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.672551 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="a357c661-43db-45de-af4d-930aa51c9743" containerName="dnsmasq-dns" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.674222 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.685520 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.685736 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.697619 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-548b64445c-bzzcp"] Feb 14 14:14:55 crc kubenswrapper[4750]: W0214 14:14:55.729802 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f056b9_dcdd_4ef7_99b8_51e5eca665d8.slice/crio-de1308b40e2b1abd7185b225527685f9735b868b8e20ee888d4636f72480a2af WatchSource:0}: Error finding container de1308b40e2b1abd7185b225527685f9735b868b8e20ee888d4636f72480a2af: Status 404 returned error can't find the container with id de1308b40e2b1abd7185b225527685f9735b868b8e20ee888d4636f72480a2af Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.747739 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761332 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-config\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761428 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-ovndb-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: 
\"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761461 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-httpd-config\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761575 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-internal-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761631 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwznd\" (UniqueName: \"kubernetes.io/projected/48bdd38e-f66d-438d-9303-17fa1c34cf74-kube-api-access-nwznd\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761736 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-public-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.761760 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-combined-ca-bundle\") pod 
\"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.807887 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crnrm"] Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864493 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-public-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864529 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-combined-ca-bundle\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864631 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-config\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864669 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-ovndb-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864688 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-httpd-config\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864727 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-internal-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.864755 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwznd\" (UniqueName: \"kubernetes.io/projected/48bdd38e-f66d-438d-9303-17fa1c34cf74-kube-api-access-nwznd\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.870701 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-ovndb-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.871830 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-config\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.872326 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-httpd-config\") pod \"neutron-548b64445c-bzzcp\" (UID: 
\"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.873295 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-public-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.878274 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-combined-ca-bundle\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.878292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-internal-tls-certs\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.889809 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwznd\" (UniqueName: \"kubernetes.io/projected/48bdd38e-f66d-438d-9303-17fa1c34cf74-kube-api-access-nwznd\") pod \"neutron-548b64445c-bzzcp\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.908081 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.993266 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rrlxq" 
event={"ID":"3bbc77b3-5108-40f8-b057-18e305e8f8ba","Type":"ContainerStarted","Data":"bfd06cca322d29109d5521ce37946eca2879e4407a54df6bb2f3dc3e49fc1cc7"} Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.994759 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98f056b9-dcdd-4ef7-99b8-51e5eca665d8","Type":"ContainerStarted","Data":"de1308b40e2b1abd7185b225527685f9735b868b8e20ee888d4636f72480a2af"} Feb 14 14:14:55 crc kubenswrapper[4750]: I0214 14:14:55.997284 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerStarted","Data":"0a293c585ee781e59bd34da2f1248f7689f07434f283f35826728f35e7bdbb80"} Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.001340 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9pkr" event={"ID":"9237972d-c2ac-4ee2-a178-6a750be9c50a","Type":"ContainerStarted","Data":"33bd9c23feb84763ec68aff067af462814774d271bb4c3e2ac17fd3ba5167564"} Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.001371 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9pkr" event={"ID":"9237972d-c2ac-4ee2-a178-6a750be9c50a","Type":"ContainerStarted","Data":"d8fe7dac379133b8a2071446ca0a15c77ec76a80662285b8dc7e15ed49089ec4"} Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.005035 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" event={"ID":"e30389ba-d985-4b42-b348-c8f0e8109e09","Type":"ContainerStarted","Data":"bb93a1924be9885db2ed04b327a7a39c539853d5fbb9bbf9cd6324176373db6c"} Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.005866 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9e9f3055-374a-4522-ac07-4f8d55550521","Type":"ContainerStarted","Data":"c6dacb976bf04a10c146349933601e19837180261dfea64b5e4ab804c262da5b"} Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.030807 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.036477 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rrlxq" podStartSLOduration=4.286312258 podStartE2EDuration="33.03646089s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="2026-02-14 14:14:25.461243387 +0000 UTC m=+1337.487232868" lastFinishedPulling="2026-02-14 14:14:54.211392019 +0000 UTC m=+1366.237381500" observedRunningTime="2026-02-14 14:14:56.01633062 +0000 UTC m=+1368.042320091" watchObservedRunningTime="2026-02-14 14:14:56.03646089 +0000 UTC m=+1368.062450371" Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.044343 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x9pkr" podStartSLOduration=12.044324185 podStartE2EDuration="12.044324185s" podCreationTimestamp="2026-02-14 14:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:56.035019941 +0000 UTC m=+1368.061009422" watchObservedRunningTime="2026-02-14 14:14:56.044324185 +0000 UTC m=+1368.070313666" Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.537213 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-789b68c888-x8zqh"] Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.738029 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-548b64445c-bzzcp"] Feb 14 14:14:56 crc kubenswrapper[4750]: I0214 14:14:56.759817 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a357c661-43db-45de-af4d-930aa51c9743" path="/var/lib/kubelet/pods/a357c661-43db-45de-af4d-930aa51c9743/volumes" Feb 14 14:14:56 crc kubenswrapper[4750]: W0214 14:14:56.764672 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48bdd38e_f66d_438d_9303_17fa1c34cf74.slice/crio-b1bf2a7318fadb4916b2205d2d7a9fd615021dd00b8141ef7d18a04ce78f815b WatchSource:0}: Error finding container b1bf2a7318fadb4916b2205d2d7a9fd615021dd00b8141ef7d18a04ce78f815b: Status 404 returned error can't find the container with id b1bf2a7318fadb4916b2205d2d7a9fd615021dd00b8141ef7d18a04ce78f815b Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.041444 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98f056b9-dcdd-4ef7-99b8-51e5eca665d8","Type":"ContainerStarted","Data":"24931750f8871d87006403489c140a3a45c0838df9d567df9bd51dd32cb9f494"} Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.044734 4750 generic.go:334] "Generic (PLEG): container finished" podID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerID="5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e" exitCode=0 Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.044813 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" event={"ID":"e30389ba-d985-4b42-b348-c8f0e8109e09","Type":"ContainerDied","Data":"5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e"} Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.053861 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-789b68c888-x8zqh" event={"ID":"925a32d5-f7e2-4339-84ea-a5011abb19f1","Type":"ContainerStarted","Data":"f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee"} Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.053911 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-789b68c888-x8zqh" event={"ID":"925a32d5-f7e2-4339-84ea-a5011abb19f1","Type":"ContainerStarted","Data":"5e6df20013daa610569e80ab673223f684ab4fdf6f09ed2b9d71bb30331de28a"} Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.057885 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e9f3055-374a-4522-ac07-4f8d55550521","Type":"ContainerStarted","Data":"bf6c6268f246dc15845d8dcf1509ad49e2366bf737afe42c69193cbe3c3c7e44"} Feb 14 14:14:57 crc kubenswrapper[4750]: I0214 14:14:57.087008 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548b64445c-bzzcp" event={"ID":"48bdd38e-f66d-438d-9303-17fa1c34cf74","Type":"ContainerStarted","Data":"b1bf2a7318fadb4916b2205d2d7a9fd615021dd00b8141ef7d18a04ce78f815b"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.117668 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548b64445c-bzzcp" event={"ID":"48bdd38e-f66d-438d-9303-17fa1c34cf74","Type":"ContainerStarted","Data":"07da7944871d7e2e574aa69ce678e0c729338206b6fc3d308302fd9df64e9bc3"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.118177 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548b64445c-bzzcp" event={"ID":"48bdd38e-f66d-438d-9303-17fa1c34cf74","Type":"ContainerStarted","Data":"1d3d467e9a891a1c0c8dabd1c4ff606580b2f322b32310c827f3c2cd55817c27"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.119381 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.127883 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98f056b9-dcdd-4ef7-99b8-51e5eca665d8","Type":"ContainerStarted","Data":"56e7c5366cb961c2a611d76be1397150ba751cb3ec931f4b0530edd82af5b192"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.134218 4750 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" event={"ID":"e30389ba-d985-4b42-b348-c8f0e8109e09","Type":"ContainerStarted","Data":"9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.135099 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.141653 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-789b68c888-x8zqh" event={"ID":"925a32d5-f7e2-4339-84ea-a5011abb19f1","Type":"ContainerStarted","Data":"682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.142519 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.146278 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e9f3055-374a-4522-ac07-4f8d55550521","Type":"ContainerStarted","Data":"dc95b3e8dedffcd3a1d7f033d7f682e5cc019910dd3996c0c865a43945ed6df9"} Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.167455 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-548b64445c-bzzcp" podStartSLOduration=3.167433828 podStartE2EDuration="3.167433828s" podCreationTimestamp="2026-02-14 14:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:58.159626884 +0000 UTC m=+1370.185616365" watchObservedRunningTime="2026-02-14 14:14:58.167433828 +0000 UTC m=+1370.193423309" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.235611 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-789b68c888-x8zqh" podStartSLOduration=5.235594459 
podStartE2EDuration="5.235594459s" podCreationTimestamp="2026-02-14 14:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:58.235404444 +0000 UTC m=+1370.261393925" watchObservedRunningTime="2026-02-14 14:14:58.235594459 +0000 UTC m=+1370.261583940" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.238684 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.238670954 podStartE2EDuration="22.238670954s" podCreationTimestamp="2026-02-14 14:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:58.20741593 +0000 UTC m=+1370.233405411" watchObservedRunningTime="2026-02-14 14:14:58.238670954 +0000 UTC m=+1370.264660435" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.283373 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" podStartSLOduration=5.283350314 podStartE2EDuration="5.283350314s" podCreationTimestamp="2026-02-14 14:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:58.270457372 +0000 UTC m=+1370.296446853" watchObservedRunningTime="2026-02-14 14:14:58.283350314 +0000 UTC m=+1370.309339795" Feb 14 14:14:58 crc kubenswrapper[4750]: I0214 14:14:58.309528 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.309510358 podStartE2EDuration="15.309510358s" podCreationTimestamp="2026-02-14 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:14:58.296800101 +0000 UTC m=+1370.322789582" 
watchObservedRunningTime="2026-02-14 14:14:58.309510358 +0000 UTC m=+1370.335499839" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.128805 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.129410 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.139341 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks"] Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.140605 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.144499 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.145074 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.164494 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks"] Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.204196 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swn6m\" (UniqueName: \"kubernetes.io/projected/46a812c0-eacc-43bd-822e-95d43a8882e3-kube-api-access-swn6m\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.204330 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a812c0-eacc-43bd-822e-95d43a8882e3-config-volume\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.204391 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46a812c0-eacc-43bd-822e-95d43a8882e3-secret-volume\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.309486 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a812c0-eacc-43bd-822e-95d43a8882e3-config-volume\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.309795 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46a812c0-eacc-43bd-822e-95d43a8882e3-secret-volume\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.310136 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swn6m\" (UniqueName: \"kubernetes.io/projected/46a812c0-eacc-43bd-822e-95d43a8882e3-kube-api-access-swn6m\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.313963 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a812c0-eacc-43bd-822e-95d43a8882e3-config-volume\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.328986 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46a812c0-eacc-43bd-822e-95d43a8882e3-secret-volume\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.336305 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swn6m\" (UniqueName: \"kubernetes.io/projected/46a812c0-eacc-43bd-822e-95d43a8882e3-kube-api-access-swn6m\") pod \"collect-profiles-29517975-8rdks\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:00 crc kubenswrapper[4750]: I0214 14:15:00.511741 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:01 crc kubenswrapper[4750]: I0214 14:15:01.007779 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks"] Feb 14 14:15:01 crc kubenswrapper[4750]: W0214 14:15:01.018647 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a812c0_eacc_43bd_822e_95d43a8882e3.slice/crio-ea6400bfbe2cada9dc243919389d4d456ae4368d094921d1f486693f14461afe WatchSource:0}: Error finding container ea6400bfbe2cada9dc243919389d4d456ae4368d094921d1f486693f14461afe: Status 404 returned error can't find the container with id ea6400bfbe2cada9dc243919389d4d456ae4368d094921d1f486693f14461afe Feb 14 14:15:01 crc kubenswrapper[4750]: I0214 14:15:01.231051 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" event={"ID":"46a812c0-eacc-43bd-822e-95d43a8882e3","Type":"ContainerStarted","Data":"ea6400bfbe2cada9dc243919389d4d456ae4368d094921d1f486693f14461afe"} Feb 14 14:15:03 crc 
kubenswrapper[4750]: I0214 14:15:03.899380 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.009299 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-w759g"] Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.019646 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="dnsmasq-dns" containerID="cri-o://e5c65e47b3f367863d5a39211f94bc71aad771eb5b021b1a8e6c7341d048506b" gracePeriod=10 Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.096605 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.196963 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.197041 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.264900 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.280969 4750 generic.go:334] "Generic (PLEG): container finished" podID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerID="e5c65e47b3f367863d5a39211f94bc71aad771eb5b021b1a8e6c7341d048506b" exitCode=0 Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.281089 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" 
event={"ID":"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1","Type":"ContainerDied","Data":"e5c65e47b3f367863d5a39211f94bc71aad771eb5b021b1a8e6c7341d048506b"} Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.283609 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" event={"ID":"46a812c0-eacc-43bd-822e-95d43a8882e3","Type":"ContainerStarted","Data":"0a7d7933a6ea3c604b242865bfbd5886145889b28c05ee044a6b885770a1ed1d"} Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.283925 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.290051 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 14:15:04 crc kubenswrapper[4750]: I0214 14:15:04.322197 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" podStartSLOduration=4.322167032 podStartE2EDuration="4.322167032s" podCreationTimestamp="2026-02-14 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:04.304819238 +0000 UTC m=+1376.330808719" watchObservedRunningTime="2026-02-14 14:15:04.322167032 +0000 UTC m=+1376.348156513" Feb 14 14:15:05 crc kubenswrapper[4750]: I0214 14:15:05.297606 4750 generic.go:334] "Generic (PLEG): container finished" podID="46a812c0-eacc-43bd-822e-95d43a8882e3" containerID="0a7d7933a6ea3c604b242865bfbd5886145889b28c05ee044a6b885770a1ed1d" exitCode=0 Feb 14 14:15:05 crc kubenswrapper[4750]: I0214 14:15:05.297674 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" 
event={"ID":"46a812c0-eacc-43bd-822e-95d43a8882e3","Type":"ContainerDied","Data":"0a7d7933a6ea3c604b242865bfbd5886145889b28c05ee044a6b885770a1ed1d"} Feb 14 14:15:05 crc kubenswrapper[4750]: I0214 14:15:05.302551 4750 generic.go:334] "Generic (PLEG): container finished" podID="9237972d-c2ac-4ee2-a178-6a750be9c50a" containerID="33bd9c23feb84763ec68aff067af462814774d271bb4c3e2ac17fd3ba5167564" exitCode=0 Feb 14 14:15:05 crc kubenswrapper[4750]: I0214 14:15:05.302639 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9pkr" event={"ID":"9237972d-c2ac-4ee2-a178-6a750be9c50a","Type":"ContainerDied","Data":"33bd9c23feb84763ec68aff067af462814774d271bb4c3e2ac17fd3ba5167564"} Feb 14 14:15:05 crc kubenswrapper[4750]: I0214 14:15:05.302940 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 14:15:06 crc kubenswrapper[4750]: I0214 14:15:06.340632 4750 generic.go:334] "Generic (PLEG): container finished" podID="3bbc77b3-5108-40f8-b057-18e305e8f8ba" containerID="bfd06cca322d29109d5521ce37946eca2879e4407a54df6bb2f3dc3e49fc1cc7" exitCode=0 Feb 14 14:15:06 crc kubenswrapper[4750]: I0214 14:15:06.340928 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rrlxq" event={"ID":"3bbc77b3-5108-40f8-b057-18e305e8f8ba","Type":"ContainerDied","Data":"bfd06cca322d29109d5521ce37946eca2879e4407a54df6bb2f3dc3e49fc1cc7"} Feb 14 14:15:06 crc kubenswrapper[4750]: I0214 14:15:06.341099 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.149412 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.149703 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:07 crc 
kubenswrapper[4750]: I0214 14:15:07.149714 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.149723 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.187830 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.199041 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.996348 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:07 crc kubenswrapper[4750]: I0214 14:15:07.999234 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.014915 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rrlxq" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.150028 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swn6m\" (UniqueName: \"kubernetes.io/projected/46a812c0-eacc-43bd-822e-95d43a8882e3-kube-api-access-swn6m\") pod \"46a812c0-eacc-43bd-822e-95d43a8882e3\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151167 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq7m9\" (UniqueName: \"kubernetes.io/projected/3bbc77b3-5108-40f8-b057-18e305e8f8ba-kube-api-access-jq7m9\") pod \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151211 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7cj9\" (UniqueName: \"kubernetes.io/projected/9237972d-c2ac-4ee2-a178-6a750be9c50a-kube-api-access-f7cj9\") pod \"9237972d-c2ac-4ee2-a178-6a750be9c50a\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151244 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-scripts\") pod \"9237972d-c2ac-4ee2-a178-6a750be9c50a\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151321 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-scripts\") pod \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151388 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-config-data\") pod \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151436 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46a812c0-eacc-43bd-822e-95d43a8882e3-secret-volume\") pod \"46a812c0-eacc-43bd-822e-95d43a8882e3\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151468 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-combined-ca-bundle\") pod \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151509 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-config-data\") pod \"9237972d-c2ac-4ee2-a178-6a750be9c50a\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151721 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-credential-keys\") pod \"9237972d-c2ac-4ee2-a178-6a750be9c50a\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151773 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-combined-ca-bundle\") pod \"9237972d-c2ac-4ee2-a178-6a750be9c50a\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " Feb 14 14:15:08 crc 
kubenswrapper[4750]: I0214 14:15:08.151868 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-fernet-keys\") pod \"9237972d-c2ac-4ee2-a178-6a750be9c50a\" (UID: \"9237972d-c2ac-4ee2-a178-6a750be9c50a\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151913 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a812c0-eacc-43bd-822e-95d43a8882e3-config-volume\") pod \"46a812c0-eacc-43bd-822e-95d43a8882e3\" (UID: \"46a812c0-eacc-43bd-822e-95d43a8882e3\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.151967 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbc77b3-5108-40f8-b057-18e305e8f8ba-logs\") pod \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\" (UID: \"3bbc77b3-5108-40f8-b057-18e305e8f8ba\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.153201 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bbc77b3-5108-40f8-b057-18e305e8f8ba-logs" (OuterVolumeSpecName: "logs") pod "3bbc77b3-5108-40f8-b057-18e305e8f8ba" (UID: "3bbc77b3-5108-40f8-b057-18e305e8f8ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.157942 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a812c0-eacc-43bd-822e-95d43a8882e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "46a812c0-eacc-43bd-822e-95d43a8882e3" (UID: "46a812c0-eacc-43bd-822e-95d43a8882e3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.175074 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbc77b3-5108-40f8-b057-18e305e8f8ba-kube-api-access-jq7m9" (OuterVolumeSpecName: "kube-api-access-jq7m9") pod "3bbc77b3-5108-40f8-b057-18e305e8f8ba" (UID: "3bbc77b3-5108-40f8-b057-18e305e8f8ba"). InnerVolumeSpecName "kube-api-access-jq7m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.175542 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-scripts" (OuterVolumeSpecName: "scripts") pod "3bbc77b3-5108-40f8-b057-18e305e8f8ba" (UID: "3bbc77b3-5108-40f8-b057-18e305e8f8ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.177306 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9237972d-c2ac-4ee2-a178-6a750be9c50a" (UID: "9237972d-c2ac-4ee2-a178-6a750be9c50a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.178155 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a812c0-eacc-43bd-822e-95d43a8882e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46a812c0-eacc-43bd-822e-95d43a8882e3" (UID: "46a812c0-eacc-43bd-822e-95d43a8882e3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.205389 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a812c0-eacc-43bd-822e-95d43a8882e3-kube-api-access-swn6m" (OuterVolumeSpecName: "kube-api-access-swn6m") pod "46a812c0-eacc-43bd-822e-95d43a8882e3" (UID: "46a812c0-eacc-43bd-822e-95d43a8882e3"). InnerVolumeSpecName "kube-api-access-swn6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.205990 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9237972d-c2ac-4ee2-a178-6a750be9c50a" (UID: "9237972d-c2ac-4ee2-a178-6a750be9c50a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.207328 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9237972d-c2ac-4ee2-a178-6a750be9c50a-kube-api-access-f7cj9" (OuterVolumeSpecName: "kube-api-access-f7cj9") pod "9237972d-c2ac-4ee2-a178-6a750be9c50a" (UID: "9237972d-c2ac-4ee2-a178-6a750be9c50a"). InnerVolumeSpecName "kube-api-access-f7cj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.218106 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-scripts" (OuterVolumeSpecName: "scripts") pod "9237972d-c2ac-4ee2-a178-6a750be9c50a" (UID: "9237972d-c2ac-4ee2-a178-6a750be9c50a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.248898 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-config-data" (OuterVolumeSpecName: "config-data") pod "9237972d-c2ac-4ee2-a178-6a750be9c50a" (UID: "9237972d-c2ac-4ee2-a178-6a750be9c50a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255066 4750 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255100 4750 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255110 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a812c0-eacc-43bd-822e-95d43a8882e3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255130 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbc77b3-5108-40f8-b057-18e305e8f8ba-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255139 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swn6m\" (UniqueName: \"kubernetes.io/projected/46a812c0-eacc-43bd-822e-95d43a8882e3-kube-api-access-swn6m\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255149 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq7m9\" (UniqueName: 
\"kubernetes.io/projected/3bbc77b3-5108-40f8-b057-18e305e8f8ba-kube-api-access-jq7m9\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255158 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7cj9\" (UniqueName: \"kubernetes.io/projected/9237972d-c2ac-4ee2-a178-6a750be9c50a-kube-api-access-f7cj9\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255165 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255173 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255182 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46a812c0-eacc-43bd-822e-95d43a8882e3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255190 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.255596 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bbc77b3-5108-40f8-b057-18e305e8f8ba" (UID: "3bbc77b3-5108-40f8-b057-18e305e8f8ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.258246 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-config-data" (OuterVolumeSpecName: "config-data") pod "3bbc77b3-5108-40f8-b057-18e305e8f8ba" (UID: "3bbc77b3-5108-40f8-b057-18e305e8f8ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.278334 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9237972d-c2ac-4ee2-a178-6a750be9c50a" (UID: "9237972d-c2ac-4ee2-a178-6a750be9c50a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.319911 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.355911 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-config\") pod \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356054 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-sb\") pod \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356227 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqps9\" (UniqueName: \"kubernetes.io/projected/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-kube-api-access-xqps9\") pod \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356298 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-svc\") pod \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356331 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-nb\") pod \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356348 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-swift-storage-0\") pod \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\" (UID: \"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1\") " Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356880 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356899 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbc77b3-5108-40f8-b057-18e305e8f8ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.356912 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9237972d-c2ac-4ee2-a178-6a750be9c50a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.370291 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-kube-api-access-xqps9" (OuterVolumeSpecName: "kube-api-access-xqps9") pod "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" (UID: "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1"). InnerVolumeSpecName "kube-api-access-xqps9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.409559 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" event={"ID":"46a812c0-eacc-43bd-822e-95d43a8882e3","Type":"ContainerDied","Data":"ea6400bfbe2cada9dc243919389d4d456ae4368d094921d1f486693f14461afe"} Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.409593 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6400bfbe2cada9dc243919389d4d456ae4368d094921d1f486693f14461afe" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.409576 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.413900 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9pkr" event={"ID":"9237972d-c2ac-4ee2-a178-6a750be9c50a","Type":"ContainerDied","Data":"d8fe7dac379133b8a2071446ca0a15c77ec76a80662285b8dc7e15ed49089ec4"} Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.413991 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8fe7dac379133b8a2071446ca0a15c77ec76a80662285b8dc7e15ed49089ec4" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.414204 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x9pkr" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.431742 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" event={"ID":"aff97ea1-695c-4c0a-b88b-8707fa7a6ed1","Type":"ContainerDied","Data":"8883661350f6f277c4d61705bf1dc53bcb54149014b3af84f9fad77c0195a330"} Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.431803 4750 scope.go:117] "RemoveContainer" containerID="e5c65e47b3f367863d5a39211f94bc71aad771eb5b021b1a8e6c7341d048506b" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.432106 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-w759g" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.438425 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rrlxq" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.439009 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rrlxq" event={"ID":"3bbc77b3-5108-40f8-b057-18e305e8f8ba","Type":"ContainerDied","Data":"3e140fb5ac35c5abaf9cabfd3d06bb855d9c8d87948342ee571efeb77fac0e39"} Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.439047 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e140fb5ac35c5abaf9cabfd3d06bb855d9c8d87948342ee571efeb77fac0e39" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.463774 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqps9\" (UniqueName: \"kubernetes.io/projected/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-kube-api-access-xqps9\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.492943 4750 scope.go:117] "RemoveContainer" containerID="4987f543ecd138f730848ca11e139742a3225227593d1c7c826362fef087483b" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.498446 4750 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f9f7d688-glpvb"] Feb 14 14:15:08 crc kubenswrapper[4750]: E0214 14:15:08.498988 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a812c0-eacc-43bd-822e-95d43a8882e3" containerName="collect-profiles" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499005 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a812c0-eacc-43bd-822e-95d43a8882e3" containerName="collect-profiles" Feb 14 14:15:08 crc kubenswrapper[4750]: E0214 14:15:08.499019 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="dnsmasq-dns" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499025 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="dnsmasq-dns" Feb 14 14:15:08 crc kubenswrapper[4750]: E0214 14:15:08.499038 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="init" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499044 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="init" Feb 14 14:15:08 crc kubenswrapper[4750]: E0214 14:15:08.499056 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9237972d-c2ac-4ee2-a178-6a750be9c50a" containerName="keystone-bootstrap" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499064 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9237972d-c2ac-4ee2-a178-6a750be9c50a" containerName="keystone-bootstrap" Feb 14 14:15:08 crc kubenswrapper[4750]: E0214 14:15:08.499082 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbc77b3-5108-40f8-b057-18e305e8f8ba" containerName="placement-db-sync" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499088 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3bbc77b3-5108-40f8-b057-18e305e8f8ba" containerName="placement-db-sync" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499313 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" containerName="dnsmasq-dns" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499341 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbc77b3-5108-40f8-b057-18e305e8f8ba" containerName="placement-db-sync" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499358 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a812c0-eacc-43bd-822e-95d43a8882e3" containerName="collect-profiles" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.499375 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9237972d-c2ac-4ee2-a178-6a750be9c50a" containerName="keystone-bootstrap" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.500847 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.504728 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.504983 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.505173 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.505379 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.505506 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wqpzp" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.570903 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f9f7d688-glpvb"] Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670335 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-logs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670409 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-public-tls-certs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670480 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-combined-ca-bundle\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670511 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-internal-tls-certs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670547 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-config-data\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670586 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqlxn\" (UniqueName: \"kubernetes.io/projected/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-kube-api-access-hqlxn\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.670608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-scripts\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.744801 4750 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" (UID: "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771565 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-config-data\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771620 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqlxn\" (UniqueName: \"kubernetes.io/projected/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-kube-api-access-hqlxn\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771642 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-scripts\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771685 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-logs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771726 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-public-tls-certs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771786 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-combined-ca-bundle\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771817 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-internal-tls-certs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.771878 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.772255 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-logs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.775548 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.775722 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 
14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.776197 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.787373 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.788019 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-combined-ca-bundle\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.800306 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-config-data\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.800375 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-public-tls-certs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.809647 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-scripts\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.810594 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-internal-tls-certs\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.812076 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqlxn\" (UniqueName: \"kubernetes.io/projected/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-kube-api-access-hqlxn\") pod \"placement-6f9f7d688-glpvb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.837537 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wqpzp" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.845178 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.950446 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-config" (OuterVolumeSpecName: "config") pod "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" (UID: "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.955603 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" (UID: "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.961544 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" (UID: "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.962583 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" (UID: "aff97ea1-695c-4c0a-b88b-8707fa7a6ed1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.978165 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.978191 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.978202 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:08 crc kubenswrapper[4750]: I0214 14:15:08.978212 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 
14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.033922 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.034041 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.036790 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.185760 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-w759g"] Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.213173 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-w759g"] Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.228009 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bb6cf9d49-hj8cz"] Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.229741 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.234347 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtfpt" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.234672 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.234839 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.235045 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.235207 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.235321 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.238476 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bb6cf9d49-hj8cz"] Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287521 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-scripts\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287597 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-public-tls-certs\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " 
pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287639 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-internal-tls-certs\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287668 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hh9s\" (UniqueName: \"kubernetes.io/projected/42f54e99-7974-44a5-9796-5ab9a50db818-kube-api-access-5hh9s\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287685 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-credential-keys\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287711 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-combined-ca-bundle\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287753 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-config-data\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: 
\"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.287822 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-fernet-keys\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390038 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-fernet-keys\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390170 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-scripts\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390260 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-public-tls-certs\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390335 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-internal-tls-certs\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 
14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390364 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hh9s\" (UniqueName: \"kubernetes.io/projected/42f54e99-7974-44a5-9796-5ab9a50db818-kube-api-access-5hh9s\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390388 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-credential-keys\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390434 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-combined-ca-bundle\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.390694 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-config-data\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.396820 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-config-data\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.400866 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-fernet-keys\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.401075 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-combined-ca-bundle\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.401453 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-credential-keys\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.402048 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-internal-tls-certs\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.402655 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-public-tls-certs\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.419291 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42f54e99-7974-44a5-9796-5ab9a50db818-scripts\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.421770 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hh9s\" (UniqueName: \"kubernetes.io/projected/42f54e99-7974-44a5-9796-5ab9a50db818-kube-api-access-5hh9s\") pod \"keystone-7bb6cf9d49-hj8cz\" (UID: \"42f54e99-7974-44a5-9796-5ab9a50db818\") " pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.438421 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f9f7d688-glpvb"] Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.470351 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6b4rh" event={"ID":"1d67ec74-2a65-494e-a768-5cc6e6714e49","Type":"ContainerStarted","Data":"a57c412bd66e48323dca61d1d0d8e9450afe348ca7ea6b4b6efe0b1f515b48e0"} Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.491471 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gdzkt" event={"ID":"2b46a12b-a34f-4850-a1b8-a764ba798764","Type":"ContainerStarted","Data":"7f5918ed050a13023e1bd7a13e78da8692eb0e13a13e35789d7defca00b9e0ef"} Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.492239 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6b4rh" podStartSLOduration=3.720439672 podStartE2EDuration="46.49221589s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="2026-02-14 14:14:25.476529095 +0000 UTC m=+1337.502518566" lastFinishedPulling="2026-02-14 14:15:08.248305303 +0000 UTC m=+1380.274294784" observedRunningTime="2026-02-14 14:15:09.489492216 +0000 UTC m=+1381.515481707" watchObservedRunningTime="2026-02-14 14:15:09.49221589 +0000 UTC m=+1381.518205381" Feb 14 14:15:09 crc 
kubenswrapper[4750]: I0214 14:15:09.496337 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f9f7d688-glpvb" event={"ID":"dbac059a-8e0e-46a2-857c-b2515fa2a8eb","Type":"ContainerStarted","Data":"7bd2f16b4e1c25ec72847e89a3b3801fa2ec078cce736178bc3a0bb717dc8419"} Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.508279 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerStarted","Data":"b7477af210cc5e31950b608be65e1fb707ce2c86b4963d1893068a9783ad71d9"} Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.552849 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gdzkt" podStartSLOduration=3.6566405189999998 podStartE2EDuration="46.552829026s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="2026-02-14 14:14:25.098026606 +0000 UTC m=+1337.124016087" lastFinishedPulling="2026-02-14 14:15:07.994215113 +0000 UTC m=+1380.020204594" observedRunningTime="2026-02-14 14:15:09.518877208 +0000 UTC m=+1381.544866689" watchObservedRunningTime="2026-02-14 14:15:09.552829026 +0000 UTC m=+1381.578818507" Feb 14 14:15:09 crc kubenswrapper[4750]: I0214 14:15:09.576864 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.164613 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.165137 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.272231 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bb6cf9d49-hj8cz"] Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.554888 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f9f7d688-glpvb" event={"ID":"dbac059a-8e0e-46a2-857c-b2515fa2a8eb","Type":"ContainerStarted","Data":"59844409a78c8cef20e9660672e95c489ca8fbf8c1d0d56503a178e40b5ae2ee"} Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.555192 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f9f7d688-glpvb" event={"ID":"dbac059a-8e0e-46a2-857c-b2515fa2a8eb","Type":"ContainerStarted","Data":"af54dc8bb6b112162d2bc32bd882c1f654dc53ba10caa3ffc47d38a0064e71bb"} Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.556614 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.556644 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.561936 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bb6cf9d49-hj8cz" event={"ID":"42f54e99-7974-44a5-9796-5ab9a50db818","Type":"ContainerStarted","Data":"6923c29db69206cd8786727e26889145ef1377aa0f52ecdf33e110fa5658d2f7"} Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.588428 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.602704 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f9f7d688-glpvb" podStartSLOduration=2.602686692 podStartE2EDuration="2.602686692s" podCreationTimestamp="2026-02-14 14:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:10.588354591 +0000 UTC m=+1382.614344072" watchObservedRunningTime="2026-02-14 14:15:10.602686692 +0000 UTC m=+1382.628676173" Feb 14 14:15:10 crc kubenswrapper[4750]: I0214 14:15:10.763350 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff97ea1-695c-4c0a-b88b-8707fa7a6ed1" path="/var/lib/kubelet/pods/aff97ea1-695c-4c0a-b88b-8707fa7a6ed1/volumes" Feb 14 14:15:11 crc kubenswrapper[4750]: I0214 14:15:11.575263 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bb6cf9d49-hj8cz" event={"ID":"42f54e99-7974-44a5-9796-5ab9a50db818","Type":"ContainerStarted","Data":"f0a2911ee9eff80d7a4540733a34c51e4ca339ec59d6bb68fe245d43a9c2a05b"} Feb 14 14:15:11 crc kubenswrapper[4750]: I0214 14:15:11.575778 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 14 14:15:11 crc kubenswrapper[4750]: I0214 14:15:11.579922 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6jc5" event={"ID":"6c96aa35-ddbc-4485-ac13-a2f08de1dd28","Type":"ContainerStarted","Data":"f6014677be4fabec48663f7d6b80b860121f24b4645be3f8d4726c154fa54545"} Feb 14 14:15:11 crc kubenswrapper[4750]: I0214 14:15:11.599598 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bb6cf9d49-hj8cz" podStartSLOduration=2.599579793 podStartE2EDuration="2.599579793s" podCreationTimestamp="2026-02-14 14:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:11.597313241 +0000 UTC m=+1383.623302722" watchObservedRunningTime="2026-02-14 14:15:11.599579793 +0000 UTC m=+1383.625569274" Feb 14 14:15:11 crc kubenswrapper[4750]: I0214 14:15:11.631923 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n6jc5" podStartSLOduration=4.3438346899999996 podStartE2EDuration="48.631900955s" podCreationTimestamp="2026-02-14 14:14:23 +0000 UTC" firstStartedPulling="2026-02-14 14:14:25.506930785 +0000 UTC m=+1337.532920266" lastFinishedPulling="2026-02-14 14:15:09.79499705 +0000 UTC m=+1381.820986531" observedRunningTime="2026-02-14 14:15:11.621869171 +0000 UTC m=+1383.647858652" watchObservedRunningTime="2026-02-14 14:15:11.631900955 +0000 UTC m=+1383.657890436" Feb 14 14:15:14 crc kubenswrapper[4750]: I0214 14:15:14.614991 4750 generic.go:334] "Generic (PLEG): container finished" podID="1d67ec74-2a65-494e-a768-5cc6e6714e49" containerID="a57c412bd66e48323dca61d1d0d8e9450afe348ca7ea6b4b6efe0b1f515b48e0" exitCode=0 Feb 14 14:15:14 crc kubenswrapper[4750]: I0214 14:15:14.615548 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6b4rh" event={"ID":"1d67ec74-2a65-494e-a768-5cc6e6714e49","Type":"ContainerDied","Data":"a57c412bd66e48323dca61d1d0d8e9450afe348ca7ea6b4b6efe0b1f515b48e0"} Feb 14 14:15:15 crc kubenswrapper[4750]: I0214 14:15:15.628522 4750 generic.go:334] "Generic (PLEG): container finished" podID="2b46a12b-a34f-4850-a1b8-a764ba798764" containerID="7f5918ed050a13023e1bd7a13e78da8692eb0e13a13e35789d7defca00b9e0ef" exitCode=0 Feb 14 14:15:15 crc kubenswrapper[4750]: I0214 14:15:15.628733 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gdzkt" event={"ID":"2b46a12b-a34f-4850-a1b8-a764ba798764","Type":"ContainerDied","Data":"7f5918ed050a13023e1bd7a13e78da8692eb0e13a13e35789d7defca00b9e0ef"} Feb 14 14:15:16 crc 
kubenswrapper[4750]: I0214 14:15:16.639639 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6b4rh" event={"ID":"1d67ec74-2a65-494e-a768-5cc6e6714e49","Type":"ContainerDied","Data":"e5d8c389a111e4edd18166f3a86c851fffbfcdf5d8323dd8ebda278fb8edff18"} Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.640089 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d8c389a111e4edd18166f3a86c851fffbfcdf5d8323dd8ebda278fb8edff18" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.642158 4750 generic.go:334] "Generic (PLEG): container finished" podID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" containerID="f6014677be4fabec48663f7d6b80b860121f24b4645be3f8d4726c154fa54545" exitCode=0 Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.642335 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6jc5" event={"ID":"6c96aa35-ddbc-4485-ac13-a2f08de1dd28","Type":"ContainerDied","Data":"f6014677be4fabec48663f7d6b80b860121f24b4645be3f8d4726c154fa54545"} Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.720367 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.770301 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-combined-ca-bundle\") pod \"1d67ec74-2a65-494e-a768-5cc6e6714e49\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.770363 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-db-sync-config-data\") pod \"1d67ec74-2a65-494e-a768-5cc6e6714e49\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.770584 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtll7\" (UniqueName: \"kubernetes.io/projected/1d67ec74-2a65-494e-a768-5cc6e6714e49-kube-api-access-gtll7\") pod \"1d67ec74-2a65-494e-a768-5cc6e6714e49\" (UID: \"1d67ec74-2a65-494e-a768-5cc6e6714e49\") " Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.776463 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1d67ec74-2a65-494e-a768-5cc6e6714e49" (UID: "1d67ec74-2a65-494e-a768-5cc6e6714e49"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.776568 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d67ec74-2a65-494e-a768-5cc6e6714e49-kube-api-access-gtll7" (OuterVolumeSpecName: "kube-api-access-gtll7") pod "1d67ec74-2a65-494e-a768-5cc6e6714e49" (UID: "1d67ec74-2a65-494e-a768-5cc6e6714e49"). 
InnerVolumeSpecName "kube-api-access-gtll7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.816575 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d67ec74-2a65-494e-a768-5cc6e6714e49" (UID: "1d67ec74-2a65-494e-a768-5cc6e6714e49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.875710 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtll7\" (UniqueName: \"kubernetes.io/projected/1d67ec74-2a65-494e-a768-5cc6e6714e49-kube-api-access-gtll7\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.875743 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:16 crc kubenswrapper[4750]: I0214 14:15:16.875753 4750 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d67ec74-2a65-494e-a768-5cc6e6714e49-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.535262 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gdzkt" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.592265 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-config-data\") pod \"2b46a12b-a34f-4850-a1b8-a764ba798764\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.592353 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-combined-ca-bundle\") pod \"2b46a12b-a34f-4850-a1b8-a764ba798764\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.592465 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btvcd\" (UniqueName: \"kubernetes.io/projected/2b46a12b-a34f-4850-a1b8-a764ba798764-kube-api-access-btvcd\") pod \"2b46a12b-a34f-4850-a1b8-a764ba798764\" (UID: \"2b46a12b-a34f-4850-a1b8-a764ba798764\") " Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.610715 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b46a12b-a34f-4850-a1b8-a764ba798764-kube-api-access-btvcd" (OuterVolumeSpecName: "kube-api-access-btvcd") pod "2b46a12b-a34f-4850-a1b8-a764ba798764" (UID: "2b46a12b-a34f-4850-a1b8-a764ba798764"). InnerVolumeSpecName "kube-api-access-btvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.638133 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b46a12b-a34f-4850-a1b8-a764ba798764" (UID: "2b46a12b-a34f-4850-a1b8-a764ba798764"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.652362 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gdzkt" event={"ID":"2b46a12b-a34f-4850-a1b8-a764ba798764","Type":"ContainerDied","Data":"040613a512a4e850ce0ee9bd84484cdcfb9de8110bff6a92de8f60c887d9efdd"} Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.652413 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="040613a512a4e850ce0ee9bd84484cdcfb9de8110bff6a92de8f60c887d9efdd" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.652515 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gdzkt" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.652526 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6b4rh" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.688380 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-config-data" (OuterVolumeSpecName: "config-data") pod "2b46a12b-a34f-4850-a1b8-a764ba798764" (UID: "2b46a12b-a34f-4850-a1b8-a764ba798764"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.698105 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.698194 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b46a12b-a34f-4850-a1b8-a764ba798764-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.698222 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btvcd\" (UniqueName: \"kubernetes.io/projected/2b46a12b-a34f-4850-a1b8-a764ba798764-kube-api-access-btvcd\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.957829 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77f9f44bff-kmkdt"] Feb 14 14:15:17 crc kubenswrapper[4750]: E0214 14:15:17.958529 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b46a12b-a34f-4850-a1b8-a764ba798764" containerName="heat-db-sync" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.958551 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b46a12b-a34f-4850-a1b8-a764ba798764" containerName="heat-db-sync" Feb 14 14:15:17 crc kubenswrapper[4750]: E0214 14:15:17.958567 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d67ec74-2a65-494e-a768-5cc6e6714e49" containerName="barbican-db-sync" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.958574 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d67ec74-2a65-494e-a768-5cc6e6714e49" containerName="barbican-db-sync" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.958797 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d67ec74-2a65-494e-a768-5cc6e6714e49" 
containerName="barbican-db-sync" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.958820 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b46a12b-a34f-4850-a1b8-a764ba798764" containerName="heat-db-sync" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.960017 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.974695 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.974809 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-flpd6" Feb 14 14:15:17 crc kubenswrapper[4750]: I0214 14:15:17.975107 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.005698 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-combined-ca-bundle\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.005730 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-config-data\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.005754 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgtm\" (UniqueName: 
\"kubernetes.io/projected/35b85269-938a-4bc4-8321-f11d72214b39-kube-api-access-cqgtm\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.005807 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-config-data-custom\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.005846 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b85269-938a-4bc4-8321-f11d72214b39-logs\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.026018 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fdccf6dd6-ftmgl"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.027918 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.034836 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.090543 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77f9f44bff-kmkdt"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124167 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fdccf6dd6-ftmgl"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124485 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8wl\" (UniqueName: \"kubernetes.io/projected/cd37e6f7-6e18-4587-8237-234b4d5cf12a-kube-api-access-fz8wl\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124519 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-config-data\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124578 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-config-data-custom\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124680 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b85269-938a-4bc4-8321-f11d72214b39-logs\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124723 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-combined-ca-bundle\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124822 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd37e6f7-6e18-4587-8237-234b4d5cf12a-logs\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.124881 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-config-data-custom\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.125056 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-combined-ca-bundle\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 
crc kubenswrapper[4750]: I0214 14:15:18.125071 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-config-data\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.125103 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgtm\" (UniqueName: \"kubernetes.io/projected/35b85269-938a-4bc4-8321-f11d72214b39-kube-api-access-cqgtm\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.132072 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b85269-938a-4bc4-8321-f11d72214b39-logs\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.150361 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-config-data-custom\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.152686 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-combined-ca-bundle\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 
14:15:18.177702 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqgtm\" (UniqueName: \"kubernetes.io/projected/35b85269-938a-4bc4-8321-f11d72214b39-kube-api-access-cqgtm\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.194510 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b85269-938a-4bc4-8321-f11d72214b39-config-data\") pod \"barbican-worker-77f9f44bff-kmkdt\" (UID: \"35b85269-938a-4bc4-8321-f11d72214b39\") " pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.235156 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bfp5k"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.236929 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.238448 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd37e6f7-6e18-4587-8237-234b4d5cf12a-logs\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.238512 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-config-data-custom\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.238621 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz8wl\" (UniqueName: \"kubernetes.io/projected/cd37e6f7-6e18-4587-8237-234b4d5cf12a-kube-api-access-fz8wl\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.238641 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-config-data\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.238698 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-combined-ca-bundle\") pod 
\"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.242898 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd37e6f7-6e18-4587-8237-234b4d5cf12a-logs\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.264997 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-combined-ca-bundle\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.266808 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-config-data-custom\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.267307 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd37e6f7-6e18-4587-8237-234b4d5cf12a-config-data\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.269783 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8wl\" (UniqueName: 
\"kubernetes.io/projected/cd37e6f7-6e18-4587-8237-234b4d5cf12a-kube-api-access-fz8wl\") pod \"barbican-keystone-listener-fdccf6dd6-ftmgl\" (UID: \"cd37e6f7-6e18-4587-8237-234b4d5cf12a\") " pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.271913 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bfp5k"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.326416 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77f9f44bff-kmkdt" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.342591 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-svc\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.342640 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.342696 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznc4\" (UniqueName: \"kubernetes.io/projected/d96f0e84-d627-4744-9bab-d6550d8bb7ee-kube-api-access-zznc4\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.342747 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-config\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.342816 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.342849 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.368804 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bdf8d8ffd-twqjc"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.370873 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.374987 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.380679 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.414835 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bdf8d8ffd-twqjc"] Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.444796 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznc4\" (UniqueName: \"kubernetes.io/projected/d96f0e84-d627-4744-9bab-d6550d8bb7ee-kube-api-access-zznc4\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.444900 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-config\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.444940 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-combined-ca-bundle\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.444991 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-logs\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445025 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445055 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2sr\" (UniqueName: \"kubernetes.io/projected/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-kube-api-access-2p2sr\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445077 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445106 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-svc\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445151 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data-custom\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445171 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.445218 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.446219 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.446257 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-svc\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.446428 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.446939 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-config\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.447043 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.472881 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zznc4\" (UniqueName: \"kubernetes.io/projected/d96f0e84-d627-4744-9bab-d6550d8bb7ee-kube-api-access-zznc4\") pod \"dnsmasq-dns-85ff748b95-bfp5k\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: E0214 14:15:18.478832 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b46a12b_a34f_4850_a1b8_a764ba798764.slice/crio-040613a512a4e850ce0ee9bd84484cdcfb9de8110bff6a92de8f60c887d9efdd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b46a12b_a34f_4850_a1b8_a764ba798764.slice\": RecentStats: unable to find data in memory cache]" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.547069 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-combined-ca-bundle\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 
14:15:18.547167 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-logs\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.547220 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2sr\" (UniqueName: \"kubernetes.io/projected/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-kube-api-access-2p2sr\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.547269 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data-custom\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.547318 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.548400 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-logs\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.551456 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data-custom\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.552701 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-combined-ca-bundle\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.553736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.565390 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2sr\" (UniqueName: \"kubernetes.io/projected/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-kube-api-access-2p2sr\") pod \"barbican-api-6bdf8d8ffd-twqjc\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.642324 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:18 crc kubenswrapper[4750]: I0214 14:15:18.710744 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.016202 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.072858 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-config-data\") pod \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.072991 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xql4m\" (UniqueName: \"kubernetes.io/projected/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-kube-api-access-xql4m\") pod \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.073097 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-db-sync-config-data\") pod \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.073158 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-combined-ca-bundle\") pod \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.073268 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-etc-machine-id\") pod \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.073307 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-scripts\") pod \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\" (UID: \"6c96aa35-ddbc-4485-ac13-a2f08de1dd28\") " Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.073839 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6c96aa35-ddbc-4485-ac13-a2f08de1dd28" (UID: "6c96aa35-ddbc-4485-ac13-a2f08de1dd28"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.077862 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-kube-api-access-xql4m" (OuterVolumeSpecName: "kube-api-access-xql4m") pod "6c96aa35-ddbc-4485-ac13-a2f08de1dd28" (UID: "6c96aa35-ddbc-4485-ac13-a2f08de1dd28"). InnerVolumeSpecName "kube-api-access-xql4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.080534 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-scripts" (OuterVolumeSpecName: "scripts") pod "6c96aa35-ddbc-4485-ac13-a2f08de1dd28" (UID: "6c96aa35-ddbc-4485-ac13-a2f08de1dd28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.080713 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6c96aa35-ddbc-4485-ac13-a2f08de1dd28" (UID: "6c96aa35-ddbc-4485-ac13-a2f08de1dd28"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.119759 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c96aa35-ddbc-4485-ac13-a2f08de1dd28" (UID: "6c96aa35-ddbc-4485-ac13-a2f08de1dd28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.144449 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-config-data" (OuterVolumeSpecName: "config-data") pod "6c96aa35-ddbc-4485-ac13-a2f08de1dd28" (UID: "6c96aa35-ddbc-4485-ac13-a2f08de1dd28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.178485 4750 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.178515 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.178525 4750 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.178534 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-scripts\") on node \"crc\" DevicePath \"\"" Feb 
14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.178542 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.178550 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xql4m\" (UniqueName: \"kubernetes.io/projected/6c96aa35-ddbc-4485-ac13-a2f08de1dd28-kube-api-access-xql4m\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:19 crc kubenswrapper[4750]: E0214 14:15:19.283368 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.340842 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77f9f44bff-kmkdt"] Feb 14 14:15:19 crc kubenswrapper[4750]: W0214 14:15:19.504930 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96f0e84_d627_4744_9bab_d6550d8bb7ee.slice/crio-8b6a481515cf9b8964a2c4e4dba369b2c1bc084f6168140f85419b87e1b132ff WatchSource:0}: Error finding container 8b6a481515cf9b8964a2c4e4dba369b2c1bc084f6168140f85419b87e1b132ff: Status 404 returned error can't find the container with id 8b6a481515cf9b8964a2c4e4dba369b2c1bc084f6168140f85419b87e1b132ff Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.515086 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bfp5k"] Feb 14 14:15:19 crc kubenswrapper[4750]: W0214 14:15:19.629549 4750 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bd925a1_0c7d_40c9_a3dc_33a987cc0a3f.slice/crio-bbae6a5bc9b1524c05bbbc9b85c64c2ab60ec59aa437e93fe9bf4c1a3db05266 WatchSource:0}: Error finding container bbae6a5bc9b1524c05bbbc9b85c64c2ab60ec59aa437e93fe9bf4c1a3db05266: Status 404 returned error can't find the container with id bbae6a5bc9b1524c05bbbc9b85c64c2ab60ec59aa437e93fe9bf4c1a3db05266 Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.630646 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bdf8d8ffd-twqjc"] Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.642530 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fdccf6dd6-ftmgl"] Feb 14 14:15:19 crc kubenswrapper[4750]: W0214 14:15:19.658929 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd37e6f7_6e18_4587_8237_234b4d5cf12a.slice/crio-f4374ee9bd8ae8a9f0ccef8696ded9522bf081ae1b059f7b14085038009cb937 WatchSource:0}: Error finding container f4374ee9bd8ae8a9f0ccef8696ded9522bf081ae1b059f7b14085038009cb937: Status 404 returned error can't find the container with id f4374ee9bd8ae8a9f0ccef8696ded9522bf081ae1b059f7b14085038009cb937 Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.714914 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerStarted","Data":"82f408c6f9667ea08ce381d8cc840dfcabf3f29eacc1cf9b81be9c7ab3244bab"} Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.715072 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="ceilometer-notification-agent" containerID="cri-o://0a293c585ee781e59bd34da2f1248f7689f07434f283f35826728f35e7bdbb80" gracePeriod=30 Feb 14 14:15:19 crc kubenswrapper[4750]: 
I0214 14:15:19.715295 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.715575 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="proxy-httpd" containerID="cri-o://82f408c6f9667ea08ce381d8cc840dfcabf3f29eacc1cf9b81be9c7ab3244bab" gracePeriod=30 Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.715627 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="sg-core" containerID="cri-o://b7477af210cc5e31950b608be65e1fb707ce2c86b4963d1893068a9783ad71d9" gracePeriod=30 Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.724630 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77f9f44bff-kmkdt" event={"ID":"35b85269-938a-4bc4-8321-f11d72214b39","Type":"ContainerStarted","Data":"4a41bb9874e03f3c44a1e75ef09b4a352157226f4a07262e6da2aef80968460f"} Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.728988 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6jc5" event={"ID":"6c96aa35-ddbc-4485-ac13-a2f08de1dd28","Type":"ContainerDied","Data":"7af9d38679fdcecfac2c5eb66bb11456b3c39f18157df642670ccfb33a1b955f"} Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.729341 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af9d38679fdcecfac2c5eb66bb11456b3c39f18157df642670ccfb33a1b955f" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.729219 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n6jc5" Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.731793 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" event={"ID":"d96f0e84-d627-4744-9bab-d6550d8bb7ee","Type":"ContainerStarted","Data":"8b6a481515cf9b8964a2c4e4dba369b2c1bc084f6168140f85419b87e1b132ff"} Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.739983 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" event={"ID":"cd37e6f7-6e18-4587-8237-234b4d5cf12a","Type":"ContainerStarted","Data":"f4374ee9bd8ae8a9f0ccef8696ded9522bf081ae1b059f7b14085038009cb937"} Feb 14 14:15:19 crc kubenswrapper[4750]: I0214 14:15:19.764374 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" event={"ID":"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f","Type":"ContainerStarted","Data":"bbae6a5bc9b1524c05bbbc9b85c64c2ab60ec59aa437e93fe9bf4c1a3db05266"} Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.356641 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bfp5k"] Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.382158 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:20 crc kubenswrapper[4750]: E0214 14:15:20.382661 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" containerName="cinder-db-sync" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.382705 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" containerName="cinder-db-sync" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.382923 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" containerName="cinder-db-sync" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.384061 
4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.386427 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-thwkx" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.390291 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.390747 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.390873 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.404752 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.440177 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-c54ql"] Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.442486 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.465264 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-c54ql"] Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520192 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520365 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdkf\" (UniqueName: \"kubernetes.io/projected/a920ce34-52b6-45f6-ac27-3aa85015c234-kube-api-access-gxdkf\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520434 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520471 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520496 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520513 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520539 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520561 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520597 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520616 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520637 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bptk\" (UniqueName: \"kubernetes.io/projected/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-kube-api-access-9bptk\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.520670 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-config\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.622964 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623041 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdkf\" (UniqueName: \"kubernetes.io/projected/a920ce34-52b6-45f6-ac27-3aa85015c234-kube-api-access-gxdkf\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623088 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623133 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623154 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623170 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623194 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623212 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623240 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623259 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623279 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bptk\" (UniqueName: \"kubernetes.io/projected/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-kube-api-access-9bptk\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.623305 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-config\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.624141 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: 
\"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.624184 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.624197 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.624184 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-config\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.624885 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.625353 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc 
kubenswrapper[4750]: I0214 14:15:20.633594 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.634554 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.635288 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.635434 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.635637 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.645226 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.650248 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.653653 4750 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.684576 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdkf\" (UniqueName: \"kubernetes.io/projected/a920ce34-52b6-45f6-ac27-3aa85015c234-kube-api-access-gxdkf\") pod \"dnsmasq-dns-5c9776ccc5-c54ql\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.689988 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bptk\" (UniqueName: \"kubernetes.io/projected/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-kube-api-access-9bptk\") pod \"cinder-scheduler-0\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.718532 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729416 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-scripts\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729466 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data-custom\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729492 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/08e0cb68-c112-4ee0-aa62-68937f86dc89-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729536 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729588 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbtkh\" (UniqueName: \"kubernetes.io/projected/08e0cb68-c112-4ee0-aa62-68937f86dc89-kube-api-access-sbtkh\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.729686 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e0cb68-c112-4ee0-aa62-68937f86dc89-logs\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.768938 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.834407 4750 generic.go:334] "Generic (PLEG): container finished" podID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerID="82f408c6f9667ea08ce381d8cc840dfcabf3f29eacc1cf9b81be9c7ab3244bab" exitCode=0 Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.834456 4750 generic.go:334] "Generic (PLEG): container finished" podID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerID="b7477af210cc5e31950b608be65e1fb707ce2c86b4963d1893068a9783ad71d9" exitCode=2 Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.834544 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerDied","Data":"82f408c6f9667ea08ce381d8cc840dfcabf3f29eacc1cf9b81be9c7ab3244bab"} Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.834578 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerDied","Data":"b7477af210cc5e31950b608be65e1fb707ce2c86b4963d1893068a9783ad71d9"} Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.836631 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbtkh\" (UniqueName: \"kubernetes.io/projected/08e0cb68-c112-4ee0-aa62-68937f86dc89-kube-api-access-sbtkh\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.836745 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.837016 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e0cb68-c112-4ee0-aa62-68937f86dc89-logs\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.837871 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-scripts\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.837923 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data-custom\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.837978 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08e0cb68-c112-4ee0-aa62-68937f86dc89-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.838079 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.858561 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e0cb68-c112-4ee0-aa62-68937f86dc89-logs\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " 
pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.859267 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08e0cb68-c112-4ee0-aa62-68937f86dc89-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.867480 4750 generic.go:334] "Generic (PLEG): container finished" podID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerID="4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811" exitCode=0 Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.868295 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-scripts\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.870267 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" event={"ID":"d96f0e84-d627-4744-9bab-d6550d8bb7ee","Type":"ContainerDied","Data":"4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811"} Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.883892 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data-custom\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.894650 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbtkh\" (UniqueName: \"kubernetes.io/projected/08e0cb68-c112-4ee0-aa62-68937f86dc89-kube-api-access-sbtkh\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 
14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.895071 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.906105 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " pod="openstack/cinder-api-0" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.948961 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" event={"ID":"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f","Type":"ContainerStarted","Data":"14b64fe7b62b303c18c48742ac77d970a1b629c6bd4064ad73e6681d5f2f19bc"} Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.949007 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" event={"ID":"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f","Type":"ContainerStarted","Data":"7edfcc248a9f185b3b0c1d97c78cf29022a123fdb114d1d979c7169bcb9984b5"} Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.950252 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.950496 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:20 crc kubenswrapper[4750]: I0214 14:15:20.958065 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 14:15:21 crc kubenswrapper[4750]: I0214 14:15:21.004314 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" podStartSLOduration=3.00429928 podStartE2EDuration="3.00429928s" podCreationTimestamp="2026-02-14 14:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:21.003008144 +0000 UTC m=+1393.028997625" watchObservedRunningTime="2026-02-14 14:15:21.00429928 +0000 UTC m=+1393.030288751" Feb 14 14:15:21 crc kubenswrapper[4750]: I0214 14:15:21.961016 4750 generic.go:334] "Generic (PLEG): container finished" podID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerID="0a293c585ee781e59bd34da2f1248f7689f07434f283f35826728f35e7bdbb80" exitCode=0 Feb 14 14:15:21 crc kubenswrapper[4750]: I0214 14:15:21.961241 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerDied","Data":"0a293c585ee781e59bd34da2f1248f7689f07434f283f35826728f35e7bdbb80"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.376502 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.383488 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-sg-core-conf-yaml\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.383769 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6dlw\" (UniqueName: \"kubernetes.io/projected/af719ce6-033d-4e45-8502-cc8ee8f091c0-kube-api-access-p6dlw\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.383900 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-log-httpd\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.383934 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-config-data\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.383998 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-scripts\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.384023 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-combined-ca-bundle\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.384175 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-run-httpd\") pod \"af719ce6-033d-4e45-8502-cc8ee8f091c0\" (UID: \"af719ce6-033d-4e45-8502-cc8ee8f091c0\") " Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.385609 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.393030 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.399292 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-scripts" (OuterVolumeSpecName: "scripts") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.399365 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af719ce6-033d-4e45-8502-cc8ee8f091c0-kube-api-access-p6dlw" (OuterVolumeSpecName: "kube-api-access-p6dlw") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "kube-api-access-p6dlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.486540 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.486581 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.486594 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af719ce6-033d-4e45-8502-cc8ee8f091c0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.486605 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6dlw\" (UniqueName: \"kubernetes.io/projected/af719ce6-033d-4e45-8502-cc8ee8f091c0-kube-api-access-p6dlw\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.515667 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.579337 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.588689 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.588732 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.593206 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-config-data" (OuterVolumeSpecName: "config-data") pod "af719ce6-033d-4e45-8502-cc8ee8f091c0" (UID: "af719ce6-033d-4e45-8502-cc8ee8f091c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.703038 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af719ce6-033d-4e45-8502-cc8ee8f091c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:22 crc kubenswrapper[4750]: W0214 14:15:22.769537 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08e0cb68_c112_4ee0_aa62_68937f86dc89.slice/crio-a0ca3b184587df34bb341b5adee602458845d53aacf7b0109a7593a3ccbce08a WatchSource:0}: Error finding container a0ca3b184587df34bb341b5adee602458845d53aacf7b0109a7593a3ccbce08a: Status 404 returned error can't find the container with id a0ca3b184587df34bb341b5adee602458845d53aacf7b0109a7593a3ccbce08a Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.773464 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.876860 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.887778 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-c54ql"] Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.972908 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08e0cb68-c112-4ee0-aa62-68937f86dc89","Type":"ContainerStarted","Data":"a0ca3b184587df34bb341b5adee602458845d53aacf7b0109a7593a3ccbce08a"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.973865 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c","Type":"ContainerStarted","Data":"ecb30e180be15564e355cd7361bdbaa90e7c68621dcd7a8a955e55d139bb80a6"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.974789 4750 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" event={"ID":"a920ce34-52b6-45f6-ac27-3aa85015c234","Type":"ContainerStarted","Data":"adf2db2a871a2658587fcff668a82dfe3ff38c2b6a2275d84b0d96826fe7e108"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.978640 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af719ce6-033d-4e45-8502-cc8ee8f091c0","Type":"ContainerDied","Data":"149ccb773971c1a0d06fde0036bec1d3d28c2c5b6bb1ed637570608130b133d7"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.978676 4750 scope.go:117] "RemoveContainer" containerID="82f408c6f9667ea08ce381d8cc840dfcabf3f29eacc1cf9b81be9c7ab3244bab" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.978790 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.983644 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77f9f44bff-kmkdt" event={"ID":"35b85269-938a-4bc4-8321-f11d72214b39","Type":"ContainerStarted","Data":"e9b2bb43b1ecfd3985bf73a3980bf540e70ce6fd60234a3161ae97989b009e95"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.985966 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" event={"ID":"d96f0e84-d627-4744-9bab-d6550d8bb7ee","Type":"ContainerStarted","Data":"e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.986087 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerName="dnsmasq-dns" containerID="cri-o://e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf" gracePeriod=10 Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.986430 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.989449 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" event={"ID":"cd37e6f7-6e18-4587-8237-234b4d5cf12a","Type":"ContainerStarted","Data":"8aa3946ef882666a25b3b2854b0457e2f7ea4daacce4a50247fc3d8e6ae7dd08"} Feb 14 14:15:22 crc kubenswrapper[4750]: I0214 14:15:22.989478 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" event={"ID":"cd37e6f7-6e18-4587-8237-234b4d5cf12a","Type":"ContainerStarted","Data":"6ffb946c7b1266116c1ace6cb4d99b95eeb40721428b2ad760d170fe6c747792"} Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.012029 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" podStartSLOduration=5.01200893 podStartE2EDuration="5.01200893s" podCreationTimestamp="2026-02-14 14:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:23.001676158 +0000 UTC m=+1395.027665629" watchObservedRunningTime="2026-02-14 14:15:23.01200893 +0000 UTC m=+1395.037998411" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.036869 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fdccf6dd6-ftmgl" podStartSLOduration=3.531396724 podStartE2EDuration="6.036853089s" podCreationTimestamp="2026-02-14 14:15:17 +0000 UTC" firstStartedPulling="2026-02-14 14:15:19.700033265 +0000 UTC m=+1391.726022746" lastFinishedPulling="2026-02-14 14:15:22.20548963 +0000 UTC m=+1394.231479111" observedRunningTime="2026-02-14 14:15:23.028055608 +0000 UTC m=+1395.054045089" watchObservedRunningTime="2026-02-14 14:15:23.036853089 +0000 UTC m=+1395.062842570" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.131138 
4750 scope.go:117] "RemoveContainer" containerID="b7477af210cc5e31950b608be65e1fb707ce2c86b4963d1893068a9783ad71d9" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.168976 4750 scope.go:117] "RemoveContainer" containerID="0a293c585ee781e59bd34da2f1248f7689f07434f283f35826728f35e7bdbb80" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.192716 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.201701 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.225825 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:15:23 crc kubenswrapper[4750]: E0214 14:15:23.226289 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="proxy-httpd" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.226307 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="proxy-httpd" Feb 14 14:15:23 crc kubenswrapper[4750]: E0214 14:15:23.226331 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="sg-core" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.226338 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="sg-core" Feb 14 14:15:23 crc kubenswrapper[4750]: E0214 14:15:23.226349 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="ceilometer-notification-agent" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.226355 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="ceilometer-notification-agent" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.226556 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="proxy-httpd" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.226574 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="ceilometer-notification-agent" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.226599 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" containerName="sg-core" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.228461 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.235585 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.235813 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.263842 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326361 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-scripts\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326478 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-run-httpd\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326521 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2h4\" (UniqueName: \"kubernetes.io/projected/6d1aa060-572b-4488-8cc5-ac1b792e2464-kube-api-access-ln2h4\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326562 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326625 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326737 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-config-data\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.326830 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-log-httpd\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.428361 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-config-data\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.428653 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-log-httpd\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.428704 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-scripts\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.428773 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-run-httpd\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.428806 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2h4\" (UniqueName: \"kubernetes.io/projected/6d1aa060-572b-4488-8cc5-ac1b792e2464-kube-api-access-ln2h4\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.428849 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 
14:15:23.428906 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.430427 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-log-httpd\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.430621 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-run-httpd\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.436015 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-scripts\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.436132 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.437895 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") 
" pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.440101 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-config-data\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.606250 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2h4\" (UniqueName: \"kubernetes.io/projected/6d1aa060-572b-4488-8cc5-ac1b792e2464-kube-api-access-ln2h4\") pod \"ceilometer-0\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.617106 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.743513 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-svc\") pod \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.743560 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-swift-storage-0\") pod \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.743596 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-config\") pod \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " Feb 14 14:15:23 crc 
kubenswrapper[4750]: I0214 14:15:23.743626 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zznc4\" (UniqueName: \"kubernetes.io/projected/d96f0e84-d627-4744-9bab-d6550d8bb7ee-kube-api-access-zznc4\") pod \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.743678 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-sb\") pod \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.743893 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-nb\") pod \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\" (UID: \"d96f0e84-d627-4744-9bab-d6550d8bb7ee\") " Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.786397 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96f0e84-d627-4744-9bab-d6550d8bb7ee-kube-api-access-zznc4" (OuterVolumeSpecName: "kube-api-access-zznc4") pod "d96f0e84-d627-4744-9bab-d6550d8bb7ee" (UID: "d96f0e84-d627-4744-9bab-d6550d8bb7ee"). InnerVolumeSpecName "kube-api-access-zznc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.847560 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zznc4\" (UniqueName: \"kubernetes.io/projected/d96f0e84-d627-4744-9bab-d6550d8bb7ee-kube-api-access-zznc4\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.854701 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.891914 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d96f0e84-d627-4744-9bab-d6550d8bb7ee" (UID: "d96f0e84-d627-4744-9bab-d6550d8bb7ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.905291 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-config" (OuterVolumeSpecName: "config") pod "d96f0e84-d627-4744-9bab-d6550d8bb7ee" (UID: "d96f0e84-d627-4744-9bab-d6550d8bb7ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.908682 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d96f0e84-d627-4744-9bab-d6550d8bb7ee" (UID: "d96f0e84-d627-4744-9bab-d6550d8bb7ee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.916554 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d96f0e84-d627-4744-9bab-d6550d8bb7ee" (UID: "d96f0e84-d627-4744-9bab-d6550d8bb7ee"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.969330 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.969590 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.969683 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:23 crc kubenswrapper[4750]: I0214 14:15:23.969742 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.026521 4750 generic.go:334] "Generic (PLEG): container finished" podID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerID="cc9ad5fb23a2dd0e28a7b00c510ffa2cee4038e314473889ddf997de1bbf13e2" exitCode=0 Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.027388 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" event={"ID":"a920ce34-52b6-45f6-ac27-3aa85015c234","Type":"ContainerDied","Data":"cc9ad5fb23a2dd0e28a7b00c510ffa2cee4038e314473889ddf997de1bbf13e2"} Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.036639 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d96f0e84-d627-4744-9bab-d6550d8bb7ee" (UID: 
"d96f0e84-d627-4744-9bab-d6550d8bb7ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.073762 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d96f0e84-d627-4744-9bab-d6550d8bb7ee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.134792 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.157355 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77f9f44bff-kmkdt" event={"ID":"35b85269-938a-4bc4-8321-f11d72214b39","Type":"ContainerStarted","Data":"390afe93b0914cd91598578e891fa0020822004d7bfbe4264784aaa7de42882b"} Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.191294 4750 generic.go:334] "Generic (PLEG): container finished" podID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerID="e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf" exitCode=0 Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.191351 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" event={"ID":"d96f0e84-d627-4744-9bab-d6550d8bb7ee","Type":"ContainerDied","Data":"e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf"} Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.191377 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" event={"ID":"d96f0e84-d627-4744-9bab-d6550d8bb7ee","Type":"ContainerDied","Data":"8b6a481515cf9b8964a2c4e4dba369b2c1bc084f6168140f85419b87e1b132ff"} Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.191393 4750 scope.go:117] "RemoveContainer" containerID="e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf" Feb 14 14:15:24 crc kubenswrapper[4750]: 
I0214 14:15:24.191496 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bfp5k" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.232901 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-77f9f44bff-kmkdt" podStartSLOduration=4.334098528 podStartE2EDuration="7.232884547s" podCreationTimestamp="2026-02-14 14:15:17 +0000 UTC" firstStartedPulling="2026-02-14 14:15:19.339055215 +0000 UTC m=+1391.365044696" lastFinishedPulling="2026-02-14 14:15:22.237841234 +0000 UTC m=+1394.263830715" observedRunningTime="2026-02-14 14:15:24.211884294 +0000 UTC m=+1396.237873775" watchObservedRunningTime="2026-02-14 14:15:24.232884547 +0000 UTC m=+1396.258874028" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.252352 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08e0cb68-c112-4ee0-aa62-68937f86dc89","Type":"ContainerStarted","Data":"07b3549a303938acfea3f6f404a73861d8104708128b2a713c662b5d1e450fe4"} Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.319619 4750 scope.go:117] "RemoveContainer" containerID="4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.335608 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bfp5k"] Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.365611 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bfp5k"] Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.384928 4750 scope.go:117] "RemoveContainer" containerID="e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf" Feb 14 14:15:24 crc kubenswrapper[4750]: E0214 14:15:24.385422 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf\": container with ID starting with e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf not found: ID does not exist" containerID="e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.385445 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf"} err="failed to get container status \"e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf\": rpc error: code = NotFound desc = could not find container \"e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf\": container with ID starting with e6f1931acbb80ff5dec2b7357dd0b4c5b126bea027a1d0563b12ab22535d6eaf not found: ID does not exist" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.385465 4750 scope.go:117] "RemoveContainer" containerID="4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811" Feb 14 14:15:24 crc kubenswrapper[4750]: E0214 14:15:24.386293 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811\": container with ID starting with 4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811 not found: ID does not exist" containerID="4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.386338 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811"} err="failed to get container status \"4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811\": rpc error: code = NotFound desc = could not find container \"4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811\": container with ID 
starting with 4ee5718d6e870cd52fdc3338c4fd0371d707b0c69be7262582873532807ce811 not found: ID does not exist" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.492598 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-548b64445c-bzzcp"] Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.492948 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-548b64445c-bzzcp" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-api" containerID="cri-o://1d3d467e9a891a1c0c8dabd1c4ff606580b2f322b32310c827f3c2cd55817c27" gracePeriod=30 Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.496426 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-548b64445c-bzzcp" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-httpd" containerID="cri-o://07da7944871d7e2e574aa69ce678e0c729338206b6fc3d308302fd9df64e9bc3" gracePeriod=30 Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.559212 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b96685565-flxmp"] Feb 14 14:15:24 crc kubenswrapper[4750]: E0214 14:15:24.559934 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerName="init" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.559950 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerName="init" Feb 14 14:15:24 crc kubenswrapper[4750]: E0214 14:15:24.559972 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerName="dnsmasq-dns" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.559980 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerName="dnsmasq-dns" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.560248 4750 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" containerName="dnsmasq-dns" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.561459 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.568930 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b96685565-flxmp"] Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.614605 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-public-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.614770 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-httpd-config\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.615620 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflr5\" (UniqueName: \"kubernetes.io/projected/dbfc4c3a-8875-43db-8ca1-e829524d280f-kube-api-access-wflr5\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.616954 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-combined-ca-bundle\") pod \"neutron-b96685565-flxmp\" (UID: 
\"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.617003 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-ovndb-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.617045 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-internal-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.617087 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-config\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.687414 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.724432 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflr5\" (UniqueName: \"kubernetes.io/projected/dbfc4c3a-8875-43db-8ca1-e829524d280f-kube-api-access-wflr5\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.724585 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-combined-ca-bundle\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.724643 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-ovndb-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.724682 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-internal-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.724741 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-config\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.724836 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-public-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.725021 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-httpd-config\") pod \"neutron-b96685565-flxmp\" (UID: 
\"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.734323 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-httpd-config\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.737923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-config\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.738794 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-public-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.739801 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-internal-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.743919 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-combined-ca-bundle\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.747931 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbfc4c3a-8875-43db-8ca1-e829524d280f-ovndb-tls-certs\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.769584 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflr5\" (UniqueName: \"kubernetes.io/projected/dbfc4c3a-8875-43db-8ca1-e829524d280f-kube-api-access-wflr5\") pod \"neutron-b96685565-flxmp\" (UID: \"dbfc4c3a-8875-43db-8ca1-e829524d280f\") " pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.782171 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af719ce6-033d-4e45-8502-cc8ee8f091c0" path="/var/lib/kubelet/pods/af719ce6-033d-4e45-8502-cc8ee8f091c0/volumes" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.783530 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96f0e84-d627-4744-9bab-d6550d8bb7ee" path="/var/lib/kubelet/pods/d96f0e84-d627-4744-9bab-d6550d8bb7ee/volumes" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.831887 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.918236 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-548b64445c-bzzcp" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.196:9696/\": read tcp 10.217.0.2:44628->10.217.0.196:9696: read: connection reset by peer" Feb 14 14:15:24 crc kubenswrapper[4750]: I0214 14:15:24.948088 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.319754 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66b79d5688-qxq94"] Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.322586 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.337170 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66b79d5688-qxq94"] Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341320 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-config-data-custom\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341372 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba0aa13e-484e-48c3-9326-f606f3f5d98c-logs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341392 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mss\" (UniqueName: \"kubernetes.io/projected/ba0aa13e-484e-48c3-9326-f606f3f5d98c-kube-api-access-t6mss\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341499 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-combined-ca-bundle\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341530 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-public-tls-certs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341562 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-config-data\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.341615 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-internal-tls-certs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.342602 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.342889 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.344005 4750 generic.go:334] "Generic (PLEG): container finished" podID="48bdd38e-f66d-438d-9303-17fa1c34cf74" 
containerID="07da7944871d7e2e574aa69ce678e0c729338206b6fc3d308302fd9df64e9bc3" exitCode=0 Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.344071 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548b64445c-bzzcp" event={"ID":"48bdd38e-f66d-438d-9303-17fa1c34cf74","Type":"ContainerDied","Data":"07da7944871d7e2e574aa69ce678e0c729338206b6fc3d308302fd9df64e9bc3"} Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.355479 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerStarted","Data":"6800324bd52ca53e23a841e570ddde07bd76bc9ab0151a9d14b26caf61b2eda0"} Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.388409 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" event={"ID":"a920ce34-52b6-45f6-ac27-3aa85015c234","Type":"ContainerStarted","Data":"003af2487d17274f859bb41798795b9cc1317e7cb38338e850cb05caa7b7e0ee"} Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.407479 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.407505 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.407533 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08e0cb68-c112-4ee0-aa62-68937f86dc89","Type":"ContainerStarted","Data":"61535dc5924b201b157107b59a416a8bc47c44fccfbc5842133e24aac042f33e"} Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.401671 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api-log" containerID="cri-o://07b3549a303938acfea3f6f404a73861d8104708128b2a713c662b5d1e450fe4" gracePeriod=30 Feb 
14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.401997 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api" containerID="cri-o://61535dc5924b201b157107b59a416a8bc47c44fccfbc5842133e24aac042f33e" gracePeriod=30 Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.460617 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-config-data-custom\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.460763 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba0aa13e-484e-48c3-9326-f606f3f5d98c-logs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.460802 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mss\" (UniqueName: \"kubernetes.io/projected/ba0aa13e-484e-48c3-9326-f606f3f5d98c-kube-api-access-t6mss\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.461392 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-combined-ca-bundle\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.461522 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-public-tls-certs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.461598 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-config-data\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.461789 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-internal-tls-certs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.466823 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" podStartSLOduration=5.466799382 podStartE2EDuration="5.466799382s" podCreationTimestamp="2026-02-14 14:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:25.429870643 +0000 UTC m=+1397.455860124" watchObservedRunningTime="2026-02-14 14:15:25.466799382 +0000 UTC m=+1397.492788853" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.471011 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba0aa13e-484e-48c3-9326-f606f3f5d98c-logs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " 
pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.478313 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.478291256 podStartE2EDuration="5.478291256s" podCreationTimestamp="2026-02-14 14:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:25.446855547 +0000 UTC m=+1397.472845038" watchObservedRunningTime="2026-02-14 14:15:25.478291256 +0000 UTC m=+1397.504280737" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.478463 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-internal-tls-certs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.479654 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-combined-ca-bundle\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.496670 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-config-data-custom\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.497666 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-public-tls-certs\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.497765 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba0aa13e-484e-48c3-9326-f606f3f5d98c-config-data\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.517156 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6mss\" (UniqueName: \"kubernetes.io/projected/ba0aa13e-484e-48c3-9326-f606f3f5d98c-kube-api-access-t6mss\") pod \"barbican-api-66b79d5688-qxq94\" (UID: \"ba0aa13e-484e-48c3-9326-f606f3f5d98c\") " pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.597725 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b96685565-flxmp"] Feb 14 14:15:25 crc kubenswrapper[4750]: W0214 14:15:25.620012 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbfc4c3a_8875_43db_8ca1_e829524d280f.slice/crio-370e37cc0125d2c846457d0a446d801cf18591a0fe874f827f3b0bbed52645cb WatchSource:0}: Error finding container 370e37cc0125d2c846457d0a446d801cf18591a0fe874f827f3b0bbed52645cb: Status 404 returned error can't find the container with id 370e37cc0125d2c846457d0a446d801cf18591a0fe874f827f3b0bbed52645cb Feb 14 14:15:25 crc kubenswrapper[4750]: I0214 14:15:25.674263 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.044131 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-548b64445c-bzzcp" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.196:9696/\": dial tcp 10.217.0.196:9696: connect: connection refused" Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.407310 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66b79d5688-qxq94"] Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.446358 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerStarted","Data":"4c558ea25893dfab1452765a2645380852a9815b1365a4fdb35e27ecdc0ff60f"} Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.530598 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b96685565-flxmp" event={"ID":"dbfc4c3a-8875-43db-8ca1-e829524d280f","Type":"ContainerStarted","Data":"47a017226d4cc1ae2c2fc08b9e478bcde75547f97394bd0de70166777778d952"} Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.530682 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b96685565-flxmp" event={"ID":"dbfc4c3a-8875-43db-8ca1-e829524d280f","Type":"ContainerStarted","Data":"370e37cc0125d2c846457d0a446d801cf18591a0fe874f827f3b0bbed52645cb"} Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.584863 4750 generic.go:334] "Generic (PLEG): container finished" podID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerID="07b3549a303938acfea3f6f404a73861d8104708128b2a713c662b5d1e450fe4" exitCode=143 Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.585033 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"08e0cb68-c112-4ee0-aa62-68937f86dc89","Type":"ContainerDied","Data":"07b3549a303938acfea3f6f404a73861d8104708128b2a713c662b5d1e450fe4"} Feb 14 14:15:26 crc kubenswrapper[4750]: I0214 14:15:26.623994 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c","Type":"ContainerStarted","Data":"42bec5cca83c08c6fc191a0d596d269904c3ed7e814c73fbc7e219cd883f0ae4"} Feb 14 14:15:27 crc kubenswrapper[4750]: I0214 14:15:27.668014 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66b79d5688-qxq94" event={"ID":"ba0aa13e-484e-48c3-9326-f606f3f5d98c","Type":"ContainerStarted","Data":"4cc555228f18c6a989d1bbef9ce67555039576b30a1041327cfa6e8ff88b46b8"} Feb 14 14:15:27 crc kubenswrapper[4750]: I0214 14:15:27.670249 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b96685565-flxmp" event={"ID":"dbfc4c3a-8875-43db-8ca1-e829524d280f","Type":"ContainerStarted","Data":"82158b823c23fe20094e3b1fb0ec18d361ff571657c6055fc9b20d912556c6fc"} Feb 14 14:15:27 crc kubenswrapper[4750]: I0214 14:15:27.671229 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:27 crc kubenswrapper[4750]: I0214 14:15:27.692417 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b96685565-flxmp" podStartSLOduration=3.692397993 podStartE2EDuration="3.692397993s" podCreationTimestamp="2026-02-14 14:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:27.691791856 +0000 UTC m=+1399.717781357" watchObservedRunningTime="2026-02-14 14:15:27.692397993 +0000 UTC m=+1399.718387484" Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.807442 4750 generic.go:334] "Generic (PLEG): container finished" podID="48bdd38e-f66d-438d-9303-17fa1c34cf74" 
containerID="1d3d467e9a891a1c0c8dabd1c4ff606580b2f322b32310c827f3c2cd55817c27" exitCode=0 Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.816482 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548b64445c-bzzcp" event={"ID":"48bdd38e-f66d-438d-9303-17fa1c34cf74","Type":"ContainerDied","Data":"1d3d467e9a891a1c0c8dabd1c4ff606580b2f322b32310c827f3c2cd55817c27"} Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.843198 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c","Type":"ContainerStarted","Data":"fccfa1089d0500dfda55254642e7b516c6e155f009f78e045e4eb0f14e203718"} Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.871985 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66b79d5688-qxq94" event={"ID":"ba0aa13e-484e-48c3-9326-f606f3f5d98c","Type":"ContainerStarted","Data":"67c69090c0c4918f103940f321b9cdad0b0d7d0f1fb4441d2863c9d71fe5c738"} Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.872062 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.872073 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66b79d5688-qxq94" event={"ID":"ba0aa13e-484e-48c3-9326-f606f3f5d98c","Type":"ContainerStarted","Data":"d70f9d497db28ad779a58349a7dbc40df39fa4c62b3b2e6e9215b990f8f78ed4"} Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.872340 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.893616 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.8270288709999996 podStartE2EDuration="8.893581733s" podCreationTimestamp="2026-02-14 14:15:20 +0000 UTC" 
firstStartedPulling="2026-02-14 14:15:22.848676599 +0000 UTC m=+1394.874666080" lastFinishedPulling="2026-02-14 14:15:23.915229461 +0000 UTC m=+1395.941218942" observedRunningTime="2026-02-14 14:15:28.880787164 +0000 UTC m=+1400.906776645" watchObservedRunningTime="2026-02-14 14:15:28.893581733 +0000 UTC m=+1400.919571214" Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.911592 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.923199 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerStarted","Data":"6328e321c5128238b62cd61170d1cc699371c100f8abfd71bca33d6dadc9dc11"} Feb 14 14:15:28 crc kubenswrapper[4750]: I0214 14:15:28.948010 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66b79d5688-qxq94" podStartSLOduration=3.947987079 podStartE2EDuration="3.947987079s" podCreationTimestamp="2026-02-14 14:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:28.922807762 +0000 UTC m=+1400.948797243" watchObservedRunningTime="2026-02-14 14:15:28.947987079 +0000 UTC m=+1400.973976550" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.005204 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-public-tls-certs\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.071277 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod 
"48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.106726 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-httpd-config\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.106808 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-ovndb-tls-certs\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.106832 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwznd\" (UniqueName: \"kubernetes.io/projected/48bdd38e-f66d-438d-9303-17fa1c34cf74-kube-api-access-nwznd\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.106870 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-internal-tls-certs\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.106932 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-config\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 
14:15:29.107138 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-combined-ca-bundle\") pod \"48bdd38e-f66d-438d-9303-17fa1c34cf74\" (UID: \"48bdd38e-f66d-438d-9303-17fa1c34cf74\") " Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.107586 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.110860 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bdd38e-f66d-438d-9303-17fa1c34cf74-kube-api-access-nwznd" (OuterVolumeSpecName: "kube-api-access-nwznd") pod "48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "kube-api-access-nwznd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.112593 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.203968 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.210354 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.212488 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.212516 4750 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.212526 4750 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.212534 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwznd\" (UniqueName: \"kubernetes.io/projected/48bdd38e-f66d-438d-9303-17fa1c34cf74-kube-api-access-nwznd\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.215243 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.247071 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-config" (OuterVolumeSpecName: "config") pod "48bdd38e-f66d-438d-9303-17fa1c34cf74" (UID: "48bdd38e-f66d-438d-9303-17fa1c34cf74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.314330 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.314362 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/48bdd38e-f66d-438d-9303-17fa1c34cf74-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.934458 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-548b64445c-bzzcp" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.934456 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548b64445c-bzzcp" event={"ID":"48bdd38e-f66d-438d-9303-17fa1c34cf74","Type":"ContainerDied","Data":"b1bf2a7318fadb4916b2205d2d7a9fd615021dd00b8141ef7d18a04ce78f815b"} Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.934607 4750 scope.go:117] "RemoveContainer" containerID="07da7944871d7e2e574aa69ce678e0c729338206b6fc3d308302fd9df64e9bc3" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.938534 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerStarted","Data":"c7cabaf625f34638066ca7860ea67aa8702cf00a29c87d2efabca4af62eb8b22"} Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.972404 4750 scope.go:117] "RemoveContainer" containerID="1d3d467e9a891a1c0c8dabd1c4ff606580b2f322b32310c827f3c2cd55817c27" Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.973108 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-548b64445c-bzzcp"] Feb 14 14:15:29 crc kubenswrapper[4750]: I0214 14:15:29.989870 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-548b64445c-bzzcp"] Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.128749 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.128800 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.128842 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.129594 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3911c83e66f85b48593d6e540ca28f5e1b07698970a4746fbe1e53b6a3e79ffd"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.129650 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://3911c83e66f85b48593d6e540ca28f5e1b07698970a4746fbe1e53b6a3e79ffd" gracePeriod=600 Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.524504 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.719757 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.758740 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" path="/var/lib/kubelet/pods/48bdd38e-f66d-438d-9303-17fa1c34cf74/volumes" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.771279 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.876653 4750 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crnrm"] Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.877517 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerName="dnsmasq-dns" containerID="cri-o://9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d" gracePeriod=10 Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.960163 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerStarted","Data":"2091347404b7b0e4d10d6668b3463a36038b729c93414003c29b8aa92c32010d"} Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.960311 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.968209 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="3911c83e66f85b48593d6e540ca28f5e1b07698970a4746fbe1e53b6a3e79ffd" exitCode=0 Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.968267 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"3911c83e66f85b48593d6e540ca28f5e1b07698970a4746fbe1e53b6a3e79ffd"} Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.968291 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4"} Feb 14 14:15:30 crc kubenswrapper[4750]: I0214 14:15:30.968306 4750 scope.go:117] "RemoveContainer" 
containerID="02bb963338838db28a9a316102bca11f6cc643e0b302795fd6980c83c74ec211" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.004528 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.010569 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6642039029999998 podStartE2EDuration="8.010545777s" podCreationTimestamp="2026-02-14 14:15:23 +0000 UTC" firstStartedPulling="2026-02-14 14:15:24.704365636 +0000 UTC m=+1396.730355117" lastFinishedPulling="2026-02-14 14:15:30.05070751 +0000 UTC m=+1402.076696991" observedRunningTime="2026-02-14 14:15:30.989693918 +0000 UTC m=+1403.015683399" watchObservedRunningTime="2026-02-14 14:15:31.010545777 +0000 UTC m=+1403.036535258" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.279632 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.587858 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.621983 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-svc\") pod \"e30389ba-d985-4b42-b348-c8f0e8109e09\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.622583 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6rtj\" (UniqueName: \"kubernetes.io/projected/e30389ba-d985-4b42-b348-c8f0e8109e09-kube-api-access-v6rtj\") pod \"e30389ba-d985-4b42-b348-c8f0e8109e09\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.622896 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-sb\") pod \"e30389ba-d985-4b42-b348-c8f0e8109e09\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.623027 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-swift-storage-0\") pod \"e30389ba-d985-4b42-b348-c8f0e8109e09\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.623209 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-config\") pod \"e30389ba-d985-4b42-b348-c8f0e8109e09\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.623351 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-nb\") pod \"e30389ba-d985-4b42-b348-c8f0e8109e09\" (UID: \"e30389ba-d985-4b42-b348-c8f0e8109e09\") " Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.662329 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e30389ba-d985-4b42-b348-c8f0e8109e09-kube-api-access-v6rtj" (OuterVolumeSpecName: "kube-api-access-v6rtj") pod "e30389ba-d985-4b42-b348-c8f0e8109e09" (UID: "e30389ba-d985-4b42-b348-c8f0e8109e09"). InnerVolumeSpecName "kube-api-access-v6rtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.704030 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e30389ba-d985-4b42-b348-c8f0e8109e09" (UID: "e30389ba-d985-4b42-b348-c8f0e8109e09"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.726196 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.726240 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6rtj\" (UniqueName: \"kubernetes.io/projected/e30389ba-d985-4b42-b348-c8f0e8109e09-kube-api-access-v6rtj\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.753748 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e30389ba-d985-4b42-b348-c8f0e8109e09" (UID: "e30389ba-d985-4b42-b348-c8f0e8109e09"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.756186 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e30389ba-d985-4b42-b348-c8f0e8109e09" (UID: "e30389ba-d985-4b42-b348-c8f0e8109e09"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.783212 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e30389ba-d985-4b42-b348-c8f0e8109e09" (UID: "e30389ba-d985-4b42-b348-c8f0e8109e09"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.828956 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.829009 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.829021 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.852867 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-config" (OuterVolumeSpecName: "config") pod "e30389ba-d985-4b42-b348-c8f0e8109e09" (UID: "e30389ba-d985-4b42-b348-c8f0e8109e09"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.931776 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30389ba-d985-4b42-b348-c8f0e8109e09-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.985035 4750 generic.go:334] "Generic (PLEG): container finished" podID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerID="9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d" exitCode=0 Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.985094 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" event={"ID":"e30389ba-d985-4b42-b348-c8f0e8109e09","Type":"ContainerDied","Data":"9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d"} Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.985137 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" event={"ID":"e30389ba-d985-4b42-b348-c8f0e8109e09","Type":"ContainerDied","Data":"bb93a1924be9885db2ed04b327a7a39c539853d5fbb9bbf9cd6324176373db6c"} Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.985159 4750 scope.go:117] "RemoveContainer" containerID="9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d" Feb 14 14:15:31 crc kubenswrapper[4750]: I0214 14:15:31.985251 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crnrm" Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.026863 4750 scope.go:117] "RemoveContainer" containerID="5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e" Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.069277 4750 scope.go:117] "RemoveContainer" containerID="9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d" Feb 14 14:15:32 crc kubenswrapper[4750]: E0214 14:15:32.069670 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d\": container with ID starting with 9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d not found: ID does not exist" containerID="9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d" Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.069700 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d"} err="failed to get container status \"9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d\": rpc error: code = NotFound desc = could not find container \"9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d\": container with ID starting with 9d0ffa7c0a39b3514ab4da361abe74085511459e3540e2f8e0f0d79588baa44d not found: ID does not exist" Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.069721 4750 scope.go:117] "RemoveContainer" containerID="5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e" Feb 14 14:15:32 crc kubenswrapper[4750]: E0214 14:15:32.070101 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e\": container with ID starting with 
5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e not found: ID does not exist" containerID="5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e" Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.070222 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e"} err="failed to get container status \"5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e\": rpc error: code = NotFound desc = could not find container \"5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e\": container with ID starting with 5efd93b3ec9bbf6fecabed99fecbf6a4d5437edfc508ee1578ccae31177c1d2e not found: ID does not exist" Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.070242 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.082023 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crnrm"] Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.093312 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crnrm"] Feb 14 14:15:32 crc kubenswrapper[4750]: I0214 14:15:32.757364 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" path="/var/lib/kubelet/pods/e30389ba-d985-4b42-b348-c8f0e8109e09/volumes" Feb 14 14:15:33 crc kubenswrapper[4750]: I0214 14:15:33.003478 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="cinder-scheduler" containerID="cri-o://42bec5cca83c08c6fc191a0d596d269904c3ed7e814c73fbc7e219cd883f0ae4" gracePeriod=30 Feb 14 14:15:33 crc kubenswrapper[4750]: I0214 14:15:33.003540 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="probe" containerID="cri-o://fccfa1089d0500dfda55254642e7b516c6e155f009f78e045e4eb0f14e203718" gracePeriod=30 Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.035663 4750 generic.go:334] "Generic (PLEG): container finished" podID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerID="fccfa1089d0500dfda55254642e7b516c6e155f009f78e045e4eb0f14e203718" exitCode=0 Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.036310 4750 generic.go:334] "Generic (PLEG): container finished" podID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerID="42bec5cca83c08c6fc191a0d596d269904c3ed7e814c73fbc7e219cd883f0ae4" exitCode=0 Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.036358 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c","Type":"ContainerDied","Data":"fccfa1089d0500dfda55254642e7b516c6e155f009f78e045e4eb0f14e203718"} Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.036392 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c","Type":"ContainerDied","Data":"42bec5cca83c08c6fc191a0d596d269904c3ed7e814c73fbc7e219cd883f0ae4"} Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.117944 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.186065 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bptk\" (UniqueName: \"kubernetes.io/projected/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-kube-api-access-9bptk\") pod \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.187705 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data\") pod \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.187772 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-etc-machine-id\") pod \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.187915 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-scripts\") pod \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.188026 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data-custom\") pod \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.188054 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-combined-ca-bundle\") pod \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\" (UID: \"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c\") " Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.189796 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" (UID: "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.200306 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-scripts" (OuterVolumeSpecName: "scripts") pod "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" (UID: "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.200340 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-kube-api-access-9bptk" (OuterVolumeSpecName: "kube-api-access-9bptk") pod "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" (UID: "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c"). InnerVolumeSpecName "kube-api-access-9bptk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.201241 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" (UID: "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.264472 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" (UID: "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.291364 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bptk\" (UniqueName: \"kubernetes.io/projected/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-kube-api-access-9bptk\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.291636 4750 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.291895 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.291988 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.292070 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.355715 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data" (OuterVolumeSpecName: "config-data") pod "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" (UID: "0ba0e5ef-8d01-408d-ba8a-277bfc6c832c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.393628 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:34 crc kubenswrapper[4750]: I0214 14:15:34.433940 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.048052 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba0e5ef-8d01-408d-ba8a-277bfc6c832c","Type":"ContainerDied","Data":"ecb30e180be15564e355cd7361bdbaa90e7c68621dcd7a8a955e55d139bb80a6"} Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.048106 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.048545 4750 scope.go:117] "RemoveContainer" containerID="fccfa1089d0500dfda55254642e7b516c6e155f009f78e045e4eb0f14e203718" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.074530 4750 scope.go:117] "RemoveContainer" containerID="42bec5cca83c08c6fc191a0d596d269904c3ed7e814c73fbc7e219cd883f0ae4" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.098886 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.121178 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139201 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:35 crc kubenswrapper[4750]: E0214 14:15:35.139713 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-httpd" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139731 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-httpd" Feb 14 14:15:35 crc kubenswrapper[4750]: E0214 14:15:35.139744 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerName="dnsmasq-dns" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139751 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerName="dnsmasq-dns" Feb 14 14:15:35 crc kubenswrapper[4750]: E0214 14:15:35.139771 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerName="init" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139778 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerName="init" Feb 14 14:15:35 crc kubenswrapper[4750]: E0214 14:15:35.139793 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="cinder-scheduler" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139799 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="cinder-scheduler" Feb 14 14:15:35 crc kubenswrapper[4750]: E0214 14:15:35.139824 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="probe" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139830 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="probe" Feb 14 14:15:35 crc kubenswrapper[4750]: E0214 14:15:35.139839 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-api" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.139844 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-api" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.140050 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-httpd" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.140069 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e30389ba-d985-4b42-b348-c8f0e8109e09" containerName="dnsmasq-dns" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.140089 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bdd38e-f66d-438d-9303-17fa1c34cf74" containerName="neutron-api" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.140099 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="cinder-scheduler" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.140129 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" containerName="probe" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.144480 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.144595 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.146394 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.214195 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-scripts\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.214278 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwh6r\" (UniqueName: \"kubernetes.io/projected/41bd633c-6afd-4c10-a933-287724b60a3d-kube-api-access-xwh6r\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.214310 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-config-data\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.214512 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.214756 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.214918 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41bd633c-6afd-4c10-a933-287724b60a3d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.317032 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.317180 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.317265 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41bd633c-6afd-4c10-a933-287724b60a3d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.317364 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-scripts\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.317446 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwh6r\" (UniqueName: \"kubernetes.io/projected/41bd633c-6afd-4c10-a933-287724b60a3d-kube-api-access-xwh6r\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.317490 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-config-data\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.318013 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41bd633c-6afd-4c10-a933-287724b60a3d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.324476 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.324698 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.325333 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-config-data\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.327843 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bd633c-6afd-4c10-a933-287724b60a3d-scripts\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.345062 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwh6r\" (UniqueName: \"kubernetes.io/projected/41bd633c-6afd-4c10-a933-287724b60a3d-kube-api-access-xwh6r\") pod \"cinder-scheduler-0\" (UID: \"41bd633c-6afd-4c10-a933-287724b60a3d\") " pod="openstack/cinder-scheduler-0" Feb 14 14:15:35 crc kubenswrapper[4750]: I0214 14:15:35.466282 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 14:15:36 crc kubenswrapper[4750]: I0214 14:15:36.179164 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 14:15:36 crc kubenswrapper[4750]: I0214 14:15:36.757992 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba0e5ef-8d01-408d-ba8a-277bfc6c832c" path="/var/lib/kubelet/pods/0ba0e5ef-8d01-408d-ba8a-277bfc6c832c/volumes" Feb 14 14:15:37 crc kubenswrapper[4750]: I0214 14:15:37.079136 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"41bd633c-6afd-4c10-a933-287724b60a3d","Type":"ContainerStarted","Data":"c2059675520875961c97650f1c9a6eb737a2fa77ba07daf97d67f4eddd77465b"} Feb 14 14:15:37 crc kubenswrapper[4750]: I0214 14:15:37.079484 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"41bd633c-6afd-4c10-a933-287724b60a3d","Type":"ContainerStarted","Data":"2164a44fb326025b380b3780bf87b593161f15683174199a6d792666954b86aa"} Feb 14 14:15:37 crc kubenswrapper[4750]: I0214 14:15:37.245850 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:37 crc kubenswrapper[4750]: I0214 14:15:37.466265 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66b79d5688-qxq94" Feb 14 14:15:37 crc kubenswrapper[4750]: I0214 14:15:37.537247 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bdf8d8ffd-twqjc"] Feb 14 14:15:37 crc kubenswrapper[4750]: I0214 14:15:37.537688 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api-log" containerID="cri-o://7edfcc248a9f185b3b0c1d97c78cf29022a123fdb114d1d979c7169bcb9984b5" gracePeriod=30 Feb 14 14:15:37 crc 
kubenswrapper[4750]: I0214 14:15:37.538102 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api" containerID="cri-o://14b64fe7b62b303c18c48742ac77d970a1b629c6bd4064ad73e6681d5f2f19bc" gracePeriod=30 Feb 14 14:15:38 crc kubenswrapper[4750]: I0214 14:15:38.088955 4750 generic.go:334] "Generic (PLEG): container finished" podID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerID="7edfcc248a9f185b3b0c1d97c78cf29022a123fdb114d1d979c7169bcb9984b5" exitCode=143 Feb 14 14:15:38 crc kubenswrapper[4750]: I0214 14:15:38.089028 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" event={"ID":"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f","Type":"ContainerDied","Data":"7edfcc248a9f185b3b0c1d97c78cf29022a123fdb114d1d979c7169bcb9984b5"} Feb 14 14:15:38 crc kubenswrapper[4750]: I0214 14:15:38.091544 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"41bd633c-6afd-4c10-a933-287724b60a3d","Type":"ContainerStarted","Data":"5c32b1f5edfd371faabde5b6b1df02e60ed0dcb28c872d7189f2fa2b0d51200a"} Feb 14 14:15:38 crc kubenswrapper[4750]: I0214 14:15:38.119276 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.11926204 podStartE2EDuration="3.11926204s" podCreationTimestamp="2026-02-14 14:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:38.115303321 +0000 UTC m=+1410.141292802" watchObservedRunningTime="2026-02-14 14:15:38.11926204 +0000 UTC m=+1410.145251521" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.327084 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 
14:15:40.392633 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.467285 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.618540 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5474bc9d4d-7h6tg"] Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.620275 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.643614 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5474bc9d4d-7h6tg"] Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665718 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-public-tls-certs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665800 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7913f-cd87-4593-9345-e10614cac99b-logs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665823 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-config-data\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 
14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665843 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-scripts\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665902 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-internal-tls-certs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665936 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqk5\" (UniqueName: \"kubernetes.io/projected/c6e7913f-cd87-4593-9345-e10614cac99b-kube-api-access-txqk5\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.665995 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-combined-ca-bundle\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.767866 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-public-tls-certs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " 
pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.767980 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7913f-cd87-4593-9345-e10614cac99b-logs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.768006 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-config-data\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.768461 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7913f-cd87-4593-9345-e10614cac99b-logs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.768527 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-scripts\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.768964 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-internal-tls-certs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.769005 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-txqk5\" (UniqueName: \"kubernetes.io/projected/c6e7913f-cd87-4593-9345-e10614cac99b-kube-api-access-txqk5\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.769072 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-combined-ca-bundle\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.773651 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-combined-ca-bundle\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.775872 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-public-tls-certs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.784644 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-scripts\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.786367 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-config-data\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.788487 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7913f-cd87-4593-9345-e10614cac99b-internal-tls-certs\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.796830 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqk5\" (UniqueName: \"kubernetes.io/projected/c6e7913f-cd87-4593-9345-e10614cac99b-kube-api-access-txqk5\") pod \"placement-5474bc9d4d-7h6tg\" (UID: \"c6e7913f-cd87-4593-9345-e10614cac99b\") " pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.942919 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.990074 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:54074->10.217.0.203:9311: read: connection reset by peer" Feb 14 14:15:40 crc kubenswrapper[4750]: I0214 14:15:40.990144 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:54088->10.217.0.203:9311: read: connection reset by peer" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.199681 4750 generic.go:334] "Generic (PLEG): container finished" podID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerID="14b64fe7b62b303c18c48742ac77d970a1b629c6bd4064ad73e6681d5f2f19bc" exitCode=0 Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.200347 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" event={"ID":"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f","Type":"ContainerDied","Data":"14b64fe7b62b303c18c48742ac77d970a1b629c6bd4064ad73e6681d5f2f19bc"} Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.495827 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.597361 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-logs\") pod \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.597524 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-combined-ca-bundle\") pod \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.597616 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data-custom\") pod \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.597644 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p2sr\" (UniqueName: \"kubernetes.io/projected/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-kube-api-access-2p2sr\") pod \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.597731 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data\") pod \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\" (UID: \"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f\") " Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.598725 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-logs" (OuterVolumeSpecName: "logs") pod "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" (UID: "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.604015 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" (UID: "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.605941 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-kube-api-access-2p2sr" (OuterVolumeSpecName: "kube-api-access-2p2sr") pod "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" (UID: "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f"). InnerVolumeSpecName "kube-api-access-2p2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.609460 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5474bc9d4d-7h6tg"] Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.636131 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" (UID: "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.662723 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data" (OuterVolumeSpecName: "config-data") pod "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" (UID: "6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.699977 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.700269 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p2sr\" (UniqueName: \"kubernetes.io/projected/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-kube-api-access-2p2sr\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.700282 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.700291 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.700301 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:41 crc kubenswrapper[4750]: I0214 14:15:41.712037 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bb6cf9d49-hj8cz" Feb 
14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.219935 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5474bc9d4d-7h6tg" event={"ID":"c6e7913f-cd87-4593-9345-e10614cac99b","Type":"ContainerStarted","Data":"ebd19bff3108af9a6f52f3541b6e0c679d34729d61c1334aadf2bcaf6a1b84ae"} Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.220209 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5474bc9d4d-7h6tg" event={"ID":"c6e7913f-cd87-4593-9345-e10614cac99b","Type":"ContainerStarted","Data":"887164b3e4cf80e6e65871af6b6333f887f3127bbf5d00521a97c17fe35a785c"} Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.223852 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" event={"ID":"6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f","Type":"ContainerDied","Data":"bbae6a5bc9b1524c05bbbc9b85c64c2ab60ec59aa437e93fe9bf4c1a3db05266"} Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.223903 4750 scope.go:117] "RemoveContainer" containerID="14b64fe7b62b303c18c48742ac77d970a1b629c6bd4064ad73e6681d5f2f19bc" Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.224034 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bdf8d8ffd-twqjc" Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.262677 4750 scope.go:117] "RemoveContainer" containerID="7edfcc248a9f185b3b0c1d97c78cf29022a123fdb114d1d979c7169bcb9984b5" Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.274537 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bdf8d8ffd-twqjc"] Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.295443 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6bdf8d8ffd-twqjc"] Feb 14 14:15:42 crc kubenswrapper[4750]: I0214 14:15:42.778338 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" path="/var/lib/kubelet/pods/6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f/volumes" Feb 14 14:15:43 crc kubenswrapper[4750]: I0214 14:15:43.236915 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5474bc9d4d-7h6tg" event={"ID":"c6e7913f-cd87-4593-9345-e10614cac99b","Type":"ContainerStarted","Data":"3cef4cf321b9cb101450aedae80e275e8c7e6995c8d9ed8f5a77f0dfccf07954"} Feb 14 14:15:43 crc kubenswrapper[4750]: I0214 14:15:43.237417 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:43 crc kubenswrapper[4750]: I0214 14:15:43.237437 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.348312 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5474bc9d4d-7h6tg" podStartSLOduration=4.348293765 podStartE2EDuration="4.348293765s" podCreationTimestamp="2026-02-14 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:43.266582038 +0000 UTC m=+1415.292571529" 
watchObservedRunningTime="2026-02-14 14:15:44.348293765 +0000 UTC m=+1416.374283236" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.349387 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 14 14:15:44 crc kubenswrapper[4750]: E0214 14:15:44.349788 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.349800 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api" Feb 14 14:15:44 crc kubenswrapper[4750]: E0214 14:15:44.349821 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api-log" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.349827 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api-log" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.350082 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.350100 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd925a1-0c7d-40c9-a3dc-33a987cc0a3f" containerName="barbican-api-log" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.350889 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.352658 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ppc8g" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.353044 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.353639 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.372307 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.472831 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbpvb\" (UniqueName: \"kubernetes.io/projected/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-kube-api-access-bbpvb\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.473281 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.473324 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.473352 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-openstack-config\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.575051 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbpvb\" (UniqueName: \"kubernetes.io/projected/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-kube-api-access-bbpvb\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.575257 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.575305 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.575342 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-openstack-config\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.576284 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-openstack-config\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.582639 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.583664 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.599563 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbpvb\" (UniqueName: \"kubernetes.io/projected/5d9cc9c1-5726-4507-bc40-a27f3aee83c4-kube-api-access-bbpvb\") pod \"openstackclient\" (UID: \"5d9cc9c1-5726-4507-bc40-a27f3aee83c4\") " pod="openstack/openstackclient" Feb 14 14:15:44 crc kubenswrapper[4750]: I0214 14:15:44.673212 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 14 14:15:45 crc kubenswrapper[4750]: I0214 14:15:45.191602 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 14:15:45 crc kubenswrapper[4750]: I0214 14:15:45.264824 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5d9cc9c1-5726-4507-bc40-a27f3aee83c4","Type":"ContainerStarted","Data":"789add6f8568ab16b2daa6507e3aa33a66889e9a94b9e8fed5a23b53d5494566"} Feb 14 14:15:45 crc kubenswrapper[4750]: I0214 14:15:45.722860 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 14:15:47 crc kubenswrapper[4750]: I0214 14:15:47.448862 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:15:47 crc kubenswrapper[4750]: I0214 14:15:47.449479 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-httpd" containerID="cri-o://dc95b3e8dedffcd3a1d7f033d7f682e5cc019910dd3996c0c865a43945ed6df9" gracePeriod=30 Feb 14 14:15:47 crc kubenswrapper[4750]: I0214 14:15:47.449400 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-log" containerID="cri-o://bf6c6268f246dc15845d8dcf1509ad49e2366bf737afe42c69193cbe3c3c7e44" gracePeriod=30 Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.317691 4750 generic.go:334] "Generic (PLEG): container finished" podID="9e9f3055-374a-4522-ac07-4f8d55550521" containerID="bf6c6268f246dc15845d8dcf1509ad49e2366bf737afe42c69193cbe3c3c7e44" exitCode=143 Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.317771 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9e9f3055-374a-4522-ac07-4f8d55550521","Type":"ContainerDied","Data":"bf6c6268f246dc15845d8dcf1509ad49e2366bf737afe42c69193cbe3c3c7e44"} Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.978445 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85b6899f7-wlrpr"] Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.991216 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.995643 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.995900 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 14 14:15:48 crc kubenswrapper[4750]: I0214 14:15:48.996072 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.061604 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b6899f7-wlrpr"] Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075427 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-internal-tls-certs\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075512 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-log-httpd\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc 
kubenswrapper[4750]: I0214 14:15:49.075564 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-combined-ca-bundle\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075590 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-public-tls-certs\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075655 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsxtp\" (UniqueName: \"kubernetes.io/projected/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-kube-api-access-jsxtp\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075676 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-etc-swift\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075706 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-config-data\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " 
pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.075722 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-run-httpd\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.190663 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-combined-ca-bundle\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.190769 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-public-tls-certs\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.190978 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsxtp\" (UniqueName: \"kubernetes.io/projected/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-kube-api-access-jsxtp\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.191019 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-etc-swift\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" 
Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.191122 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-config-data\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.191169 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-run-httpd\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.191343 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-internal-tls-certs\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.191476 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-log-httpd\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.192142 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-log-httpd\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.203641 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-run-httpd\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.226540 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-combined-ca-bundle\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.254389 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-internal-tls-certs\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.265052 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-public-tls-certs\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.274897 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-etc-swift\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.295288 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-config-data\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.345072 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsxtp\" (UniqueName: \"kubernetes.io/projected/1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c-kube-api-access-jsxtp\") pod \"swift-proxy-85b6899f7-wlrpr\" (UID: \"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c\") " pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.618377 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.924569 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.925044 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-central-agent" containerID="cri-o://4c558ea25893dfab1452765a2645380852a9815b1365a4fdb35e27ecdc0ff60f" gracePeriod=30 Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.925197 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="proxy-httpd" containerID="cri-o://2091347404b7b0e4d10d6668b3463a36038b729c93414003c29b8aa92c32010d" gracePeriod=30 Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.925263 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="sg-core" containerID="cri-o://c7cabaf625f34638066ca7860ea67aa8702cf00a29c87d2efabca4af62eb8b22" gracePeriod=30 Feb 14 14:15:49 crc 
kubenswrapper[4750]: I0214 14:15:49.925307 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-notification-agent" containerID="cri-o://6328e321c5128238b62cd61170d1cc699371c100f8abfd71bca33d6dadc9dc11" gracePeriod=30 Feb 14 14:15:49 crc kubenswrapper[4750]: I0214 14:15:49.936635 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.065793 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.067555 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-log" containerID="cri-o://24931750f8871d87006403489c140a3a45c0838df9d567df9bd51dd32cb9f494" gracePeriod=30 Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.068146 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-httpd" containerID="cri-o://56e7c5366cb961c2a611d76be1397150ba751cb3ec931f4b0530edd82af5b192" gracePeriod=30 Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.304838 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b6899f7-wlrpr"] Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.394221 4750 generic.go:334] "Generic (PLEG): container finished" podID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerID="24931750f8871d87006403489c140a3a45c0838df9d567df9bd51dd32cb9f494" exitCode=143 Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.394297 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"98f056b9-dcdd-4ef7-99b8-51e5eca665d8","Type":"ContainerDied","Data":"24931750f8871d87006403489c140a3a45c0838df9d567df9bd51dd32cb9f494"} Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.397579 4750 generic.go:334] "Generic (PLEG): container finished" podID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerID="2091347404b7b0e4d10d6668b3463a36038b729c93414003c29b8aa92c32010d" exitCode=0 Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.397625 4750 generic.go:334] "Generic (PLEG): container finished" podID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerID="c7cabaf625f34638066ca7860ea67aa8702cf00a29c87d2efabca4af62eb8b22" exitCode=2 Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.397633 4750 generic.go:334] "Generic (PLEG): container finished" podID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerID="4c558ea25893dfab1452765a2645380852a9815b1365a4fdb35e27ecdc0ff60f" exitCode=0 Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.397653 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerDied","Data":"2091347404b7b0e4d10d6668b3463a36038b729c93414003c29b8aa92c32010d"} Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.397677 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerDied","Data":"c7cabaf625f34638066ca7860ea67aa8702cf00a29c87d2efabca4af62eb8b22"} Feb 14 14:15:50 crc kubenswrapper[4750]: I0214 14:15:50.397702 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerDied","Data":"4c558ea25893dfab1452765a2645380852a9815b1365a4fdb35e27ecdc0ff60f"} Feb 14 14:15:51 crc kubenswrapper[4750]: I0214 14:15:51.410002 4750 generic.go:334] "Generic (PLEG): container finished" podID="9e9f3055-374a-4522-ac07-4f8d55550521" 
containerID="dc95b3e8dedffcd3a1d7f033d7f682e5cc019910dd3996c0c865a43945ed6df9" exitCode=0 Feb 14 14:15:51 crc kubenswrapper[4750]: I0214 14:15:51.410335 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e9f3055-374a-4522-ac07-4f8d55550521","Type":"ContainerDied","Data":"dc95b3e8dedffcd3a1d7f033d7f682e5cc019910dd3996c0c865a43945ed6df9"} Feb 14 14:15:53 crc kubenswrapper[4750]: I0214 14:15:53.438324 4750 generic.go:334] "Generic (PLEG): container finished" podID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerID="56e7c5366cb961c2a611d76be1397150ba751cb3ec931f4b0530edd82af5b192" exitCode=0 Feb 14 14:15:53 crc kubenswrapper[4750]: I0214 14:15:53.438383 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98f056b9-dcdd-4ef7-99b8-51e5eca665d8","Type":"ContainerDied","Data":"56e7c5366cb961c2a611d76be1397150ba751cb3ec931f4b0530edd82af5b192"} Feb 14 14:15:53 crc kubenswrapper[4750]: I0214 14:15:53.442348 4750 generic.go:334] "Generic (PLEG): container finished" podID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerID="6328e321c5128238b62cd61170d1cc699371c100f8abfd71bca33d6dadc9dc11" exitCode=0 Feb 14 14:15:53 crc kubenswrapper[4750]: I0214 14:15:53.442394 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerDied","Data":"6328e321c5128238b62cd61170d1cc699371c100f8abfd71bca33d6dadc9dc11"} Feb 14 14:15:53 crc kubenswrapper[4750]: I0214 14:15:53.857190 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.207:3000/\": dial tcp 10.217.0.207:3000: connect: connection refused" Feb 14 14:15:54 crc kubenswrapper[4750]: I0214 14:15:54.962412 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/neutron-b96685565-flxmp" Feb 14 14:15:55 crc kubenswrapper[4750]: I0214 14:15:55.040820 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-789b68c888-x8zqh"] Feb 14 14:15:55 crc kubenswrapper[4750]: I0214 14:15:55.041372 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-789b68c888-x8zqh" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-api" containerID="cri-o://f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee" gracePeriod=30 Feb 14 14:15:55 crc kubenswrapper[4750]: I0214 14:15:55.041454 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-789b68c888-x8zqh" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-httpd" containerID="cri-o://682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab" gracePeriod=30 Feb 14 14:15:55 crc kubenswrapper[4750]: I0214 14:15:55.470412 4750 generic.go:334] "Generic (PLEG): container finished" podID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerID="682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab" exitCode=0 Feb 14 14:15:55 crc kubenswrapper[4750]: I0214 14:15:55.470475 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-789b68c888-x8zqh" event={"ID":"925a32d5-f7e2-4339-84ea-a5011abb19f1","Type":"ContainerDied","Data":"682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab"} Feb 14 14:15:55 crc kubenswrapper[4750]: I0214 14:15:55.960037 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.206:8776/healthcheck\": dial tcp 10.217.0.206:8776: connect: connection refused" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.335154 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-db-create-j27zc"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.336749 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.352354 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j27zc"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.432022 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lm7k9"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.434299 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.465131 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8f8b-account-create-update-8qtk7"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.466657 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.469195 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.487123 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhvp7\" (UniqueName: \"kubernetes.io/projected/e32c67d8-34c3-4340-b584-704ba0540dc7-kube-api-access-jhvp7\") pod \"nova-cell0-db-create-lm7k9\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.487197 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trt97\" (UniqueName: \"kubernetes.io/projected/55c02c14-a300-466f-b53c-8f7a96be136a-kube-api-access-trt97\") pod \"nova-api-db-create-j27zc\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.487232 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55c02c14-a300-466f-b53c-8f7a96be136a-operator-scripts\") pod \"nova-api-db-create-j27zc\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.487278 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e32c67d8-34c3-4340-b584-704ba0540dc7-operator-scripts\") pod \"nova-cell0-db-create-lm7k9\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.492459 4750 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-cell0-db-create-lm7k9"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.533484 4750 generic.go:334] "Generic (PLEG): container finished" podID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerID="61535dc5924b201b157107b59a416a8bc47c44fccfbc5842133e24aac042f33e" exitCode=137 Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.533536 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08e0cb68-c112-4ee0-aa62-68937f86dc89","Type":"ContainerDied","Data":"61535dc5924b201b157107b59a416a8bc47c44fccfbc5842133e24aac042f33e"} Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.544177 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8f8b-account-create-update-8qtk7"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.598298 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhvp7\" (UniqueName: \"kubernetes.io/projected/e32c67d8-34c3-4340-b584-704ba0540dc7-kube-api-access-jhvp7\") pod \"nova-cell0-db-create-lm7k9\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.598639 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trt97\" (UniqueName: \"kubernetes.io/projected/55c02c14-a300-466f-b53c-8f7a96be136a-kube-api-access-trt97\") pod \"nova-api-db-create-j27zc\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.598717 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55c02c14-a300-466f-b53c-8f7a96be136a-operator-scripts\") pod \"nova-api-db-create-j27zc\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc 
kubenswrapper[4750]: I0214 14:15:56.598808 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e32c67d8-34c3-4340-b584-704ba0540dc7-operator-scripts\") pod \"nova-cell0-db-create-lm7k9\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.598889 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-operator-scripts\") pod \"nova-api-8f8b-account-create-update-8qtk7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.598982 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrxv\" (UniqueName: \"kubernetes.io/projected/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-kube-api-access-crrxv\") pod \"nova-api-8f8b-account-create-update-8qtk7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.599935 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55c02c14-a300-466f-b53c-8f7a96be136a-operator-scripts\") pod \"nova-api-db-create-j27zc\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.599956 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e32c67d8-34c3-4340-b584-704ba0540dc7-operator-scripts\") pod \"nova-cell0-db-create-lm7k9\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " 
pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.627890 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhvp7\" (UniqueName: \"kubernetes.io/projected/e32c67d8-34c3-4340-b584-704ba0540dc7-kube-api-access-jhvp7\") pod \"nova-cell0-db-create-lm7k9\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.633937 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trt97\" (UniqueName: \"kubernetes.io/projected/55c02c14-a300-466f-b53c-8f7a96be136a-kube-api-access-trt97\") pod \"nova-api-db-create-j27zc\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.661675 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j27zc" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.679183 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zxk9n"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.681145 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.701306 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9dff-account-create-update-sjzvt"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.710638 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqw2g\" (UniqueName: \"kubernetes.io/projected/c038eeb5-80d1-4e94-a5de-31b056dac6f7-kube-api-access-wqw2g\") pod \"nova-cell1-db-create-zxk9n\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.710697 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-operator-scripts\") pod \"nova-api-8f8b-account-create-update-8qtk7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.710732 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c038eeb5-80d1-4e94-a5de-31b056dac6f7-operator-scripts\") pod \"nova-cell1-db-create-zxk9n\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.710752 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crrxv\" (UniqueName: \"kubernetes.io/projected/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-kube-api-access-crrxv\") pod \"nova-api-8f8b-account-create-update-8qtk7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.711367 4750 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.711853 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-operator-scripts\") pod \"nova-api-8f8b-account-create-update-8qtk7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.714096 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.722215 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zxk9n"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.741030 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9dff-account-create-update-sjzvt"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.747234 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crrxv\" (UniqueName: \"kubernetes.io/projected/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-kube-api-access-crrxv\") pod \"nova-api-8f8b-account-create-update-8qtk7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.777971 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.813370 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcd9j\" (UniqueName: \"kubernetes.io/projected/1d155439-48d6-4b0b-b096-70e2527f2d7f-kube-api-access-bcd9j\") pod \"nova-cell0-9dff-account-create-update-sjzvt\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.813422 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqw2g\" (UniqueName: \"kubernetes.io/projected/c038eeb5-80d1-4e94-a5de-31b056dac6f7-kube-api-access-wqw2g\") pod \"nova-cell1-db-create-zxk9n\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.813463 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c038eeb5-80d1-4e94-a5de-31b056dac6f7-operator-scripts\") pod \"nova-cell1-db-create-zxk9n\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.813577 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d155439-48d6-4b0b-b096-70e2527f2d7f-operator-scripts\") pod \"nova-cell0-9dff-account-create-update-sjzvt\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.817065 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c038eeb5-80d1-4e94-a5de-31b056dac6f7-operator-scripts\") pod \"nova-cell1-db-create-zxk9n\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.840432 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqw2g\" (UniqueName: \"kubernetes.io/projected/c038eeb5-80d1-4e94-a5de-31b056dac6f7-kube-api-access-wqw2g\") pod \"nova-cell1-db-create-zxk9n\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.845947 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.847373 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e02b-account-create-update-gvd4d"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.848878 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.859209 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e02b-account-create-update-gvd4d"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.900429 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.911734 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.915448 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da7aabd1-3708-46b0-a20d-25fb29aa26ef-operator-scripts\") pod \"nova-cell1-e02b-account-create-update-gvd4d\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.915504 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcd9j\" (UniqueName: \"kubernetes.io/projected/1d155439-48d6-4b0b-b096-70e2527f2d7f-kube-api-access-bcd9j\") pod \"nova-cell0-9dff-account-create-update-sjzvt\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.915602 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwxn\" (UniqueName: \"kubernetes.io/projected/da7aabd1-3708-46b0-a20d-25fb29aa26ef-kube-api-access-jwwxn\") pod \"nova-cell1-e02b-account-create-update-gvd4d\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.916122 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d155439-48d6-4b0b-b096-70e2527f2d7f-operator-scripts\") pod \"nova-cell0-9dff-account-create-update-sjzvt\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.917013 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1d155439-48d6-4b0b-b096-70e2527f2d7f-operator-scripts\") pod \"nova-cell0-9dff-account-create-update-sjzvt\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.951874 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcd9j\" (UniqueName: \"kubernetes.io/projected/1d155439-48d6-4b0b-b096-70e2527f2d7f-kube-api-access-bcd9j\") pod \"nova-cell0-9dff-account-create-update-sjzvt\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.983872 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:15:56 crc kubenswrapper[4750]: I0214 14:15:56.984101 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="046d0778-1fd6-4cfa-b632-4379f20af2b7" containerName="kube-state-metrics" containerID="cri-o://bc619036a0651f2705a33c21806adfd86d8a19832c0d600989d60e67ef051a30" gracePeriod=30 Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.019621 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwxn\" (UniqueName: \"kubernetes.io/projected/da7aabd1-3708-46b0-a20d-25fb29aa26ef-kube-api-access-jwwxn\") pod \"nova-cell1-e02b-account-create-update-gvd4d\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.019808 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da7aabd1-3708-46b0-a20d-25fb29aa26ef-operator-scripts\") pod \"nova-cell1-e02b-account-create-update-gvd4d\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " 
pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.020816 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da7aabd1-3708-46b0-a20d-25fb29aa26ef-operator-scripts\") pod \"nova-cell1-e02b-account-create-update-gvd4d\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.043790 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwxn\" (UniqueName: \"kubernetes.io/projected/da7aabd1-3708-46b0-a20d-25fb29aa26ef-kube-api-access-jwwxn\") pod \"nova-cell1-e02b-account-create-update-gvd4d\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.067235 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.067492 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" containerName="mysqld-exporter" containerID="cri-o://e00105401d705ec8d9977d0c7692f62c2dd77119e897eb1e21ce7300703f8605" gracePeriod=30 Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.234729 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.247439 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.504016 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531306 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531633 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-scripts\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531654 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-combined-ca-bundle\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531747 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e0cb68-c112-4ee0-aa62-68937f86dc89-logs\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531784 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data-custom\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531813 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/08e0cb68-c112-4ee0-aa62-68937f86dc89-etc-machine-id\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.531937 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbtkh\" (UniqueName: \"kubernetes.io/projected/08e0cb68-c112-4ee0-aa62-68937f86dc89-kube-api-access-sbtkh\") pod \"08e0cb68-c112-4ee0-aa62-68937f86dc89\" (UID: \"08e0cb68-c112-4ee0-aa62-68937f86dc89\") " Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.536275 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e0cb68-c112-4ee0-aa62-68937f86dc89-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.536421 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e0cb68-c112-4ee0-aa62-68937f86dc89-logs" (OuterVolumeSpecName: "logs") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.537773 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e0cb68-c112-4ee0-aa62-68937f86dc89-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.537804 4750 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08e0cb68-c112-4ee0-aa62-68937f86dc89-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.547771 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e0cb68-c112-4ee0-aa62-68937f86dc89-kube-api-access-sbtkh" (OuterVolumeSpecName: "kube-api-access-sbtkh") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "kube-api-access-sbtkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.548490 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.556332 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-scripts" (OuterVolumeSpecName: "scripts") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.578558 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b6899f7-wlrpr" event={"ID":"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c","Type":"ContainerStarted","Data":"c1bb3d22d0d28c94346f71c58eba1006c17a714308dda95d8887b99c6161af96"} Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.580542 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08e0cb68-c112-4ee0-aa62-68937f86dc89","Type":"ContainerDied","Data":"a0ca3b184587df34bb341b5adee602458845d53aacf7b0109a7593a3ccbce08a"} Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.580568 4750 scope.go:117] "RemoveContainer" containerID="61535dc5924b201b157107b59a416a8bc47c44fccfbc5842133e24aac042f33e" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.580685 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.589205 4750 generic.go:334] "Generic (PLEG): container finished" podID="046d0778-1fd6-4cfa-b632-4379f20af2b7" containerID="bc619036a0651f2705a33c21806adfd86d8a19832c0d600989d60e67ef051a30" exitCode=2 Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.589272 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"046d0778-1fd6-4cfa-b632-4379f20af2b7","Type":"ContainerDied","Data":"bc619036a0651f2705a33c21806adfd86d8a19832c0d600989d60e67ef051a30"} Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.593811 4750 generic.go:334] "Generic (PLEG): container finished" podID="efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" containerID="e00105401d705ec8d9977d0c7692f62c2dd77119e897eb1e21ce7300703f8605" exitCode=2 Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.593847 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f","Type":"ContainerDied","Data":"e00105401d705ec8d9977d0c7692f62c2dd77119e897eb1e21ce7300703f8605"} Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.634420 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.642217 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbtkh\" (UniqueName: \"kubernetes.io/projected/08e0cb68-c112-4ee0-aa62-68937f86dc89-kube-api-access-sbtkh\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.642450 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.642511 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.642565 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.714352 4750 scope.go:117] "RemoveContainer" containerID="07b3549a303938acfea3f6f404a73861d8104708128b2a713c662b5d1e450fe4" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.788954 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data" (OuterVolumeSpecName: "config-data") pod "08e0cb68-c112-4ee0-aa62-68937f86dc89" (UID: "08e0cb68-c112-4ee0-aa62-68937f86dc89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.873159 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e0cb68-c112-4ee0-aa62-68937f86dc89-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:57 crc kubenswrapper[4750]: I0214 14:15:57.975416 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.047793 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.075725 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-httpd-run\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.075785 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-combined-ca-bundle\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.075822 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lsz4\" (UniqueName: \"kubernetes.io/projected/9e9f3055-374a-4522-ac07-4f8d55550521-kube-api-access-6lsz4\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: 
I0214 14:15:58.076031 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.076074 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-config-data\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.076726 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-logs\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.076889 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-public-tls-certs\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.076921 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-scripts\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.080616 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: 
"9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.083220 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-logs" (OuterVolumeSpecName: "logs") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.105580 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-scripts" (OuterVolumeSpecName: "scripts") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.114286 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9f3055-374a-4522-ac07-4f8d55550521-kube-api-access-6lsz4" (OuterVolumeSpecName: "kube-api-access-6lsz4") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "kube-api-access-6lsz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.117795 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.131211 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:58 crc kubenswrapper[4750]: E0214 14:15:58.134684 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-log" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.134713 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-log" Feb 14 14:15:58 crc kubenswrapper[4750]: E0214 14:15:58.134744 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api-log" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.134753 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api-log" Feb 14 14:15:58 crc kubenswrapper[4750]: E0214 14:15:58.134772 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-httpd" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.134780 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-httpd" Feb 14 14:15:58 crc kubenswrapper[4750]: E0214 14:15:58.134793 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.134799 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.135075 4750 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.135126 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-httpd" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.135141 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" containerName="cinder-api-log" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.135157 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" containerName="glance-log" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.142230 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.156766 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.157177 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.157807 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.162762 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183528 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6fk\" (UniqueName: \"kubernetes.io/projected/ecffb890-5905-4fc1-a005-86519c0c6aea-kube-api-access-xz6fk\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183563 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183586 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ecffb890-5905-4fc1-a005-86519c0c6aea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183607 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecffb890-5905-4fc1-a005-86519c0c6aea-logs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183692 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183725 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-config-data\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183753 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183779 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-scripts\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183814 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-config-data-custom\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183899 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lsz4\" (UniqueName: \"kubernetes.io/projected/9e9f3055-374a-4522-ac07-4f8d55550521-kube-api-access-6lsz4\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183911 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183920 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.183928 4750 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9f3055-374a-4522-ac07-4f8d55550521-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: 
I0214 14:15:58.269560 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.287036 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.287379 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-config-data\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.287496 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.287603 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-scripts\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.288559 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-config-data-custom\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.288876 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.289019 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz6fk\" (UniqueName: \"kubernetes.io/projected/ecffb890-5905-4fc1-a005-86519c0c6aea-kube-api-access-xz6fk\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.289151 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ecffb890-5905-4fc1-a005-86519c0c6aea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.289314 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecffb890-5905-4fc1-a005-86519c0c6aea-logs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.289991 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecffb890-5905-4fc1-a005-86519c0c6aea-logs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 
14:15:58.290536 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.294271 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ecffb890-5905-4fc1-a005-86519c0c6aea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.302228 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.302580 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.312853 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.321990 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-config-data\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc 
kubenswrapper[4750]: I0214 14:15:58.329448 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz6fk\" (UniqueName: \"kubernetes.io/projected/ecffb890-5905-4fc1-a005-86519c0c6aea-kube-api-access-xz6fk\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.341301 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.341860 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-config-data-custom\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.357579 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecffb890-5905-4fc1-a005-86519c0c6aea-scripts\") pod \"cinder-api-0\" (UID: \"ecffb890-5905-4fc1-a005-86519c0c6aea\") " pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.400673 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: E0214 14:15:58.499883 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe 
podName:9e9f3055-374a-4522-ac07-4f8d55550521 nodeName:}" failed. No retries permitted until 2026-02-14 14:15:58.999854905 +0000 UTC m=+1431.025844386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.534323 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-config-data" (OuterVolumeSpecName: "config-data") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.562366 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.589103 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.604174 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xvdx\" (UniqueName: \"kubernetes.io/projected/046d0778-1fd6-4cfa-b632-4379f20af2b7-kube-api-access-7xvdx\") pod \"046d0778-1fd6-4cfa-b632-4379f20af2b7\" (UID: \"046d0778-1fd6-4cfa-b632-4379f20af2b7\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.604982 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.605028 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9f3055-374a-4522-ac07-4f8d55550521-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.627763 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046d0778-1fd6-4cfa-b632-4379f20af2b7-kube-api-access-7xvdx" (OuterVolumeSpecName: "kube-api-access-7xvdx") pod "046d0778-1fd6-4cfa-b632-4379f20af2b7" (UID: "046d0778-1fd6-4cfa-b632-4379f20af2b7"). InnerVolumeSpecName "kube-api-access-7xvdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.641044 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.709359 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f","Type":"ContainerDied","Data":"06538ee2dc5b6dae685e7fee74aa0d771443e642b8645f6765c991abfb713b78"} Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.709409 4750 scope.go:117] "RemoveContainer" containerID="e00105401d705ec8d9977d0c7692f62c2dd77119e897eb1e21ce7300703f8605" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.709528 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.711724 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvbq\" (UniqueName: \"kubernetes.io/projected/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-kube-api-access-lvvbq\") pod \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.711900 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7n48\" (UniqueName: \"kubernetes.io/projected/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-kube-api-access-v7n48\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712036 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712073 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-scripts\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712163 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-combined-ca-bundle\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712200 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-logs\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712225 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-httpd-run\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712249 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-combined-ca-bundle\") pod \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712336 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-config-data\") pod \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\" (UID: \"efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712368 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-internal-tls-certs\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.712392 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-config-data\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 
14:15:58.712957 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xvdx\" (UniqueName: \"kubernetes.io/projected/046d0778-1fd6-4cfa-b632-4379f20af2b7-kube-api-access-7xvdx\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.729764 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-logs" (OuterVolumeSpecName: "logs") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.736418 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-kube-api-access-v7n48" (OuterVolumeSpecName: "kube-api-access-v7n48") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "kube-api-access-v7n48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.738817 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.760304 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-scripts" (OuterVolumeSpecName: "scripts") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.778077 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: E0214 14:15:58.788802 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b podName:98f056b9-dcdd-4ef7-99b8-51e5eca665d8 nodeName:}" failed. No retries permitted until 2026-02-14 14:15:59.288774028 +0000 UTC m=+1431.314763509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.802383 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-kube-api-access-lvvbq" (OuterVolumeSpecName: "kube-api-access-lvvbq") pod "efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" (UID: "efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f"). InnerVolumeSpecName "kube-api-access-lvvbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.807716 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.808205 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.810667 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e0cb68-c112-4ee0-aa62-68937f86dc89" path="/var/lib/kubelet/pods/08e0cb68-c112-4ee0-aa62-68937f86dc89/volumes" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.817907 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.818390 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.821669 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.821692 4750 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.821701 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvbq\" (UniqueName: \"kubernetes.io/projected/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-kube-api-access-lvvbq\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.821713 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7n48\" (UniqueName: \"kubernetes.io/projected/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-kube-api-access-v7n48\") on node 
\"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.821722 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.861153 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" (UID: "efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.907886 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-config-data" (OuterVolumeSpecName: "config-data") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.914999 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.743090816 podStartE2EDuration="14.91497906s" podCreationTimestamp="2026-02-14 14:15:44 +0000 UTC" firstStartedPulling="2026-02-14 14:15:45.194049206 +0000 UTC m=+1417.220038717" lastFinishedPulling="2026-02-14 14:15:57.36593748 +0000 UTC m=+1429.391926961" observedRunningTime="2026-02-14 14:15:58.908977009 +0000 UTC m=+1430.934966490" watchObservedRunningTime="2026-02-14 14:15:58.91497906 +0000 UTC m=+1430.940968541" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.925039 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.925067 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.930315 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-config-data" (OuterVolumeSpecName: "config-data") pod "efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" (UID: "efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:58 crc kubenswrapper[4750]: I0214 14:15:58.971545 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.015401 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98f056b9-dcdd-4ef7-99b8-51e5eca665d8","Type":"ContainerDied","Data":"de1308b40e2b1abd7185b225527685f9735b868b8e20ee888d4636f72480a2af"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.015688 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b6899f7-wlrpr" event={"ID":"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c","Type":"ContainerStarted","Data":"d05dcb8e980242daddcdaa158a11ed19262ebe4ef11b9aebb715793b69a72397"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.015703 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j27zc"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.015719 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5d9cc9c1-5726-4507-bc40-a27f3aee83c4","Type":"ContainerStarted","Data":"ca05ba2af1abdd5c27af8669dcff57bf5f210da70ba1ff7df8afdc910363ad74"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.015732 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e9f3055-374a-4522-ac07-4f8d55550521","Type":"ContainerDied","Data":"c6dacb976bf04a10c146349933601e19837180261dfea64b5e4ab804c262da5b"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.015743 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"046d0778-1fd6-4cfa-b632-4379f20af2b7","Type":"ContainerDied","Data":"d625b55be84eb41376b4f62e7c28c12cd7ecda03cdb5ebb795534da9ce676cec"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.023908 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lm7k9"] Feb 14 14:15:59 crc 
kubenswrapper[4750]: I0214 14:15:59.026094 4750 scope.go:117] "RemoveContainer" containerID="56e7c5366cb961c2a611d76be1397150ba751cb3ec931f4b0530edd82af5b192" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.028728 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"9e9f3055-374a-4522-ac07-4f8d55550521\" (UID: \"9e9f3055-374a-4522-ac07-4f8d55550521\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.035539 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.035605 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f056b9-dcdd-4ef7-99b8-51e5eca665d8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.036820 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.046615 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.061555 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067213 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067638 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046d0778-1fd6-4cfa-b632-4379f20af2b7" containerName="kube-state-metrics" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067655 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="046d0778-1fd6-4cfa-b632-4379f20af2b7" containerName="kube-state-metrics" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067670 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" containerName="mysqld-exporter" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067677 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" containerName="mysqld-exporter" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067689 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-httpd" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067695 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-httpd" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067705 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-notification-agent" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067711 4750 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-notification-agent" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067725 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="proxy-httpd" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067730 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="proxy-httpd" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067751 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-central-agent" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067757 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-central-agent" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067769 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-log" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067774 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-log" Feb 14 14:15:59 crc kubenswrapper[4750]: E0214 14:15:59.067784 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="sg-core" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067789 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="sg-core" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067976 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-log" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.067998 4750 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="sg-core" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068013 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" containerName="mysqld-exporter" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068021 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-notification-agent" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068029 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" containerName="glance-httpd" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068035 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="ceilometer-central-agent" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068049 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="046d0778-1fd6-4cfa-b632-4379f20af2b7" containerName="kube-state-metrics" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068056 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" containerName="proxy-httpd" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.068745 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.071638 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.071767 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.108526 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.109398 4750 scope.go:117] "RemoveContainer" containerID="24931750f8871d87006403489c140a3a45c0838df9d567df9bd51dd32cb9f494" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.110410 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe" (OuterVolumeSpecName: "glance") pod "9e9f3055-374a-4522-ac07-4f8d55550521" (UID: "9e9f3055-374a-4522-ac07-4f8d55550521"). InnerVolumeSpecName "pvc-27549279-a7a4-4066-b282-15513a49b9fe". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.136879 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-combined-ca-bundle\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.136937 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-run-httpd\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137008 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-sg-core-conf-yaml\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137022 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-scripts\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137241 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-log-httpd\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137264 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-config-data\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137309 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln2h4\" (UniqueName: \"kubernetes.io/projected/6d1aa060-572b-4488-8cc5-ac1b792e2464-kube-api-access-ln2h4\") pod \"6d1aa060-572b-4488-8cc5-ac1b792e2464\" (UID: \"6d1aa060-572b-4488-8cc5-ac1b792e2464\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137681 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137705 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct77k\" (UniqueName: \"kubernetes.io/projected/98683c54-5137-4357-be49-f22cdf9715db-kube-api-access-ct77k\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137739 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137772 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137800 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137847 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.137868 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") on node \"crc\" " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.152510 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.160630 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-scripts" (OuterVolumeSpecName: "scripts") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.172233 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1aa060-572b-4488-8cc5-ac1b792e2464-kube-api-access-ln2h4" (OuterVolumeSpecName: "kube-api-access-ln2h4") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). InnerVolumeSpecName "kube-api-access-ln2h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.185512 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.185650 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-27549279-a7a4-4066-b282-15513a49b9fe" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe") on node "crc" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.194408 4750 scope.go:117] "RemoveContainer" containerID="dc95b3e8dedffcd3a1d7f033d7f682e5cc019910dd3996c0c865a43945ed6df9" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241546 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct77k\" (UniqueName: \"kubernetes.io/projected/98683c54-5137-4357-be49-f22cdf9715db-kube-api-access-ct77k\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241606 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: 
I0214 14:15:59.241644 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241680 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241819 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1aa060-572b-4488-8cc5-ac1b792e2464-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241847 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241858 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln2h4\" (UniqueName: \"kubernetes.io/projected/6d1aa060-572b-4488-8cc5-ac1b792e2464-kube-api-access-ln2h4\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.241868 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.248817 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.249618 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.250182 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.255077 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98683c54-5137-4357-be49-f22cdf9715db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.267580 4750 scope.go:117] "RemoveContainer" containerID="bf6c6268f246dc15845d8dcf1509ad49e2366bf737afe42c69193cbe3c3c7e44" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.285746 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct77k\" (UniqueName: \"kubernetes.io/projected/98683c54-5137-4357-be49-f22cdf9715db-kube-api-access-ct77k\") pod \"kube-state-metrics-0\" (UID: \"98683c54-5137-4357-be49-f22cdf9715db\") " pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.285821 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.322178 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/mysqld-exporter-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.329472 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.338736 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.338959 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.345434 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\" (UID: \"98f056b9-dcdd-4ef7-99b8-51e5eca665d8\") " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.358193 4750 scope.go:117] "RemoveContainer" containerID="bc619036a0651f2705a33c21806adfd86d8a19832c0d600989d60e67ef051a30" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.365834 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.410444 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.424825 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.443172 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.448194 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-config-data\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.448246 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.448286 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmppl\" (UniqueName: \"kubernetes.io/projected/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-kube-api-access-hmppl\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.448370 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc 
kubenswrapper[4750]: I0214 14:15:59.465639 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.467547 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.470274 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.470548 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.490288 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.502555 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b" (OuterVolumeSpecName: "glance") pod "98f056b9-dcdd-4ef7-99b8-51e5eca665d8" (UID: "98f056b9-dcdd-4ef7-99b8-51e5eca665d8"). InnerVolumeSpecName "pvc-f8aa6b2a-87f7-4392-adda-43640f80683b". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.589856 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-config-data\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.589925 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.589965 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmppl\" (UniqueName: \"kubernetes.io/projected/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-kube-api-access-hmppl\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590042 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590079 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc 
kubenswrapper[4750]: I0214 14:15:59.590105 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590179 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590206 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590231 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe85d9dd-19fc-4155-af2c-62cc62eb029c-logs\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590265 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " 
pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590321 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltl5k\" (UniqueName: \"kubernetes.io/projected/fe85d9dd-19fc-4155-af2c-62cc62eb029c-kube-api-access-ltl5k\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590356 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe85d9dd-19fc-4155-af2c-62cc62eb029c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.590488 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") on node \"crc\" " Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.621205 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.660133 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.662840 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-config-data\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.668360 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.675780 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmppl\" (UniqueName: \"kubernetes.io/projected/1aaa3ab6-4225-441d-b5b9-85ec8d30ca01-kube-api-access-hmppl\") pod \"mysqld-exporter-0\" (UID: \"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01\") " pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.739642 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745228 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745302 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745386 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745413 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745431 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe85d9dd-19fc-4155-af2c-62cc62eb029c-logs\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc 
kubenswrapper[4750]: I0214 14:15:59.745482 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745560 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltl5k\" (UniqueName: \"kubernetes.io/projected/fe85d9dd-19fc-4155-af2c-62cc62eb029c-kube-api-access-ltl5k\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745588 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe85d9dd-19fc-4155-af2c-62cc62eb029c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.745740 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.746278 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe85d9dd-19fc-4155-af2c-62cc62eb029c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.749868 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fe85d9dd-19fc-4155-af2c-62cc62eb029c-logs\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.757715 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.762514 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.762651 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01be76f23f3db65527afd99bd3e4627b46df2a8504625bc2942280577a76cd42/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.763102 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.763422 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f8aa6b2a-87f7-4392-adda-43640f80683b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b") on node "crc" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.771124 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.773383 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.775491 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9dff-account-create-update-sjzvt"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.779719 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe85d9dd-19fc-4155-af2c-62cc62eb029c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.806015 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.809544 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltl5k\" (UniqueName: \"kubernetes.io/projected/fe85d9dd-19fc-4155-af2c-62cc62eb029c-kube-api-access-ltl5k\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.827716 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-config-data" (OuterVolumeSpecName: "config-data") pod "6d1aa060-572b-4488-8cc5-ac1b792e2464" (UID: "6d1aa060-572b-4488-8cc5-ac1b792e2464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.827779 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zxk9n"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.837053 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e02b-account-create-update-gvd4d"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.849207 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.849237 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.849248 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d1aa060-572b-4488-8cc5-ac1b792e2464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.860760 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8f8b-account-create-update-8qtk7"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.872783 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" event={"ID":"da7aabd1-3708-46b0-a20d-25fb29aa26ef","Type":"ContainerStarted","Data":"0f25203ea71d12dbf6499e57525928f3035be9818aed0272d2465c1e261cb6cf"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.874025 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" event={"ID":"075e5fd7-94c0-48b5-9c6d-a7994c4479f7","Type":"ContainerStarted","Data":"79eb9885a3b84636316260344fb20ba91053ec82f5e93661e2ec135e9ae055f5"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.874911 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.876691 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1aa060-572b-4488-8cc5-ac1b792e2464","Type":"ContainerDied","Data":"6800324bd52ca53e23a841e570ddde07bd76bc9ab0151a9d14b26caf61b2eda0"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.876727 4750 scope.go:117] "RemoveContainer" containerID="2091347404b7b0e4d10d6668b3463a36038b729c93414003c29b8aa92c32010d" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.876776 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.889929 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" event={"ID":"1d155439-48d6-4b0b-b096-70e2527f2d7f","Type":"ContainerStarted","Data":"275eefc069f5aea05c662c1050706af645cc0e4676c787339ea79ff26ffb7a83"} Feb 14 14:15:59 crc kubenswrapper[4750]: I0214 14:15:59.940022 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxk9n" event={"ID":"c038eeb5-80d1-4e94-a5de-31b056dac6f7","Type":"ContainerStarted","Data":"fc3ac522eeababf156563a05f50233d388a335ed3e7eef21efd7816dfe343d78"} Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:15:59.978223 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j27zc" event={"ID":"55c02c14-a300-466f-b53c-8f7a96be136a","Type":"ContainerStarted","Data":"ca67e9c8c7e50b7ccf7685189776accd42b11d6da89818512275123bcbb52ded"} Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:15:59.978480 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j27zc" event={"ID":"55c02c14-a300-466f-b53c-8f7a96be136a","Type":"ContainerStarted","Data":"054e15e353de1b3c2c621bb4007658a77536d407344ae849e202c6e19db897c8"} Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:15:59.979248 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-27549279-a7a4-4066-b282-15513a49b9fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27549279-a7a4-4066-b282-15513a49b9fe\") pod \"glance-default-external-api-0\" (UID: \"fe85d9dd-19fc-4155-af2c-62cc62eb029c\") " pod="openstack/glance-default-external-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:15:59.983862 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lm7k9" 
event={"ID":"e32c67d8-34c3-4340-b584-704ba0540dc7","Type":"ContainerStarted","Data":"5d9685a5f2ddfcacdc38563153f196d1779f7d1ea51fa422ea1b5715e3277647"} Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:15:59.984027 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lm7k9" event={"ID":"e32c67d8-34c3-4340-b584-704ba0540dc7","Type":"ContainerStarted","Data":"ce8a22d398db31cd01aef76c6753a0d4d5fe9bab90747530f09e86616af57b8d"} Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:15:59.994758 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-j27zc" podStartSLOduration=3.9947403120000002 podStartE2EDuration="3.994740312s" podCreationTimestamp="2026-02-14 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:15:59.991015366 +0000 UTC m=+1432.017004847" watchObservedRunningTime="2026-02-14 14:15:59.994740312 +0000 UTC m=+1432.020729793" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.006585 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b6899f7-wlrpr" event={"ID":"1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c","Type":"ContainerStarted","Data":"c10feae061cf5b1cf4aa5410ce858262fc9161e20270aff3e28dfbc5e0dc71dd"} Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.006628 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.006641 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.017006 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lm7k9" podStartSLOduration=4.016986495 podStartE2EDuration="4.016986495s" podCreationTimestamp="2026-02-14 14:15:56 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:00.011689224 +0000 UTC m=+1432.037678715" watchObservedRunningTime="2026-02-14 14:16:00.016986495 +0000 UTC m=+1432.042975976" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.071256 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85b6899f7-wlrpr" podStartSLOduration=12.071237329 podStartE2EDuration="12.071237329s" podCreationTimestamp="2026-02-14 14:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:00.034688219 +0000 UTC m=+1432.060677700" watchObservedRunningTime="2026-02-14 14:16:00.071237329 +0000 UTC m=+1432.097226810" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.102432 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.154317 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.266553 4750 scope.go:117] "RemoveContainer" containerID="c7cabaf625f34638066ca7860ea67aa8702cf00a29c87d2efabca4af62eb8b22" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.310791 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.346555 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.353318 4750 scope.go:117] "RemoveContainer" containerID="6328e321c5128238b62cd61170d1cc699371c100f8abfd71bca33d6dadc9dc11" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.386013 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.402343 4750 scope.go:117] "RemoveContainer" containerID="4c558ea25893dfab1452765a2645380852a9815b1365a4fdb35e27ecdc0ff60f" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.411316 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.436642 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.442955 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.456644 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.456843 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.462196 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.483083 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.487214 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.490927 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.491168 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.498498 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.515318 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 14:16:00 crc kubenswrapper[4750]: W0214 14:16:00.532218 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aaa3ab6_4225_441d_b5b9_85ec8d30ca01.slice/crio-a1e88ed672465aa7eb743e2f701242c2515b7153743b421ac9bdb02ae20c6e6b WatchSource:0}: Error finding container a1e88ed672465aa7eb743e2f701242c2515b7153743b421ac9bdb02ae20c6e6b: Status 404 returned error can't find the container with id a1e88ed672465aa7eb743e2f701242c2515b7153743b421ac9bdb02ae20c6e6b Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.596839 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.596915 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " 
pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.596980 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597007 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-config-data\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597058 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597129 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rdz\" (UniqueName: \"kubernetes.io/projected/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-kube-api-access-b2rdz\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597179 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597218 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597353 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597380 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-log-httpd\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597441 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-run-httpd\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597475 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " 
pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597549 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kc95\" (UniqueName: \"kubernetes.io/projected/91a9248d-fe82-42eb-8b98-cc1812b56dc7-kube-api-access-7kc95\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597595 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-logs\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.597660 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-scripts\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.699601 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.699931 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 
14:16:00.699969 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.699991 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-config-data\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700032 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700066 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2rdz\" (UniqueName: \"kubernetes.io/projected/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-kube-api-access-b2rdz\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700089 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700135 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700211 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700230 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-log-httpd\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700264 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-run-httpd\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700287 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700317 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kc95\" (UniqueName: 
\"kubernetes.io/projected/91a9248d-fe82-42eb-8b98-cc1812b56dc7-kube-api-access-7kc95\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700344 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-logs\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.700381 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-scripts\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.702756 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.704623 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-run-httpd\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.705905 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-scripts\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 
14:16:00.711158 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.711392 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-log-httpd\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.712587 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-logs\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.715693 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.717387 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.720519 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.720895 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.734153 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-config-data\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.738796 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.739694 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2rdz\" (UniqueName: \"kubernetes.io/projected/27ce82d4-dcbe-48fe-8b91-8704ef172bf1-kube-api-access-b2rdz\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.751755 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kc95\" (UniqueName: \"kubernetes.io/projected/91a9248d-fe82-42eb-8b98-cc1812b56dc7-kube-api-access-7kc95\") pod \"ceilometer-0\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " 
pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.760676 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.760712 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4116c04a4908c386919762e10ff51e59035632215c786316944928532f707927/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.780602 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046d0778-1fd6-4cfa-b632-4379f20af2b7" path="/var/lib/kubelet/pods/046d0778-1fd6-4cfa-b632-4379f20af2b7/volumes" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.781687 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1aa060-572b-4488-8cc5-ac1b792e2464" path="/var/lib/kubelet/pods/6d1aa060-572b-4488-8cc5-ac1b792e2464/volumes" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.782473 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f056b9-dcdd-4ef7-99b8-51e5eca665d8" path="/var/lib/kubelet/pods/98f056b9-dcdd-4ef7-99b8-51e5eca665d8/volumes" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.798432 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9f3055-374a-4522-ac07-4f8d55550521" path="/var/lib/kubelet/pods/9e9f3055-374a-4522-ac07-4f8d55550521/volumes" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.799077 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f" path="/var/lib/kubelet/pods/efe4bd73-1ac8-4b3e-8ad0-fdd9439c4d4f/volumes" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.812660 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.921324 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8aa6b2a-87f7-4392-adda-43640f80683b\") pod \"glance-default-internal-api-0\" (UID: \"27ce82d4-dcbe-48fe-8b91-8704ef172bf1\") " pod="openstack/glance-default-internal-api-0" Feb 14 14:16:00 crc kubenswrapper[4750]: I0214 14:16:00.973205 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.083818 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.098845 4750 generic.go:334] "Generic (PLEG): container finished" podID="e32c67d8-34c3-4340-b584-704ba0540dc7" containerID="5d9685a5f2ddfcacdc38563153f196d1779f7d1ea51fa422ea1b5715e3277647" exitCode=0 Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.098936 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lm7k9" event={"ID":"e32c67d8-34c3-4340-b584-704ba0540dc7","Type":"ContainerDied","Data":"5d9685a5f2ddfcacdc38563153f196d1779f7d1ea51fa422ea1b5715e3277647"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.115518 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" event={"ID":"da7aabd1-3708-46b0-a20d-25fb29aa26ef","Type":"ContainerStarted","Data":"a67f4c0bf20f873aeaaaeacb4e67be31502ae99c7333e53a6c3c4845b5c1df9c"} Feb 14 14:16:01 crc kubenswrapper[4750]: 
I0214 14:16:01.119954 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxk9n" event={"ID":"c038eeb5-80d1-4e94-a5de-31b056dac6f7","Type":"ContainerStarted","Data":"0796b0757cbee597c4d0dddf5b2fc2c1e90e21d75e7c9427b8d2ceb6561ab6bb"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.136645 4750 generic.go:334] "Generic (PLEG): container finished" podID="55c02c14-a300-466f-b53c-8f7a96be136a" containerID="ca67e9c8c7e50b7ccf7685189776accd42b11d6da89818512275123bcbb52ded" exitCode=0 Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.136704 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j27zc" event={"ID":"55c02c14-a300-466f-b53c-8f7a96be136a","Type":"ContainerDied","Data":"ca67e9c8c7e50b7ccf7685189776accd42b11d6da89818512275123bcbb52ded"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.159417 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"98683c54-5137-4357-be49-f22cdf9715db","Type":"ContainerStarted","Data":"fc93020d5d7469d6d188967078aca1fed86339b37bf249352a31271024d54d45"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.185914 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ecffb890-5905-4fc1-a005-86519c0c6aea","Type":"ContainerStarted","Data":"6f37df577417775c3c87624ac8f830b6d4c03b5de79868fbecf8444ce3996019"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.216097 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01","Type":"ContainerStarted","Data":"a1e88ed672465aa7eb743e2f701242c2515b7153743b421ac9bdb02ae20c6e6b"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.238787 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" podStartSLOduration=5.23876945 
podStartE2EDuration="5.23876945s" podCreationTimestamp="2026-02-14 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:01.135646635 +0000 UTC m=+1433.161636116" watchObservedRunningTime="2026-02-14 14:16:01.23876945 +0000 UTC m=+1433.264758931" Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.252229 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-zxk9n" podStartSLOduration=5.252214343 podStartE2EDuration="5.252214343s" podCreationTimestamp="2026-02-14 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:01.171442464 +0000 UTC m=+1433.197431945" watchObservedRunningTime="2026-02-14 14:16:01.252214343 +0000 UTC m=+1433.278203824" Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.274818 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" event={"ID":"075e5fd7-94c0-48b5-9c6d-a7994c4479f7","Type":"ContainerStarted","Data":"8424770a65d2fd4a8710e640c1faf114950897a277c4dbf948cf1435c33ff81f"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.297037 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" event={"ID":"1d155439-48d6-4b0b-b096-70e2527f2d7f","Type":"ContainerStarted","Data":"f1e9bdbacff15ea4d132835b29bff39c95dabcea5765aaf6edde4ebc1334bb8e"} Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.394745 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" podStartSLOduration=5.394725939 podStartE2EDuration="5.394725939s" podCreationTimestamp="2026-02-14 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-14 14:16:01.328564566 +0000 UTC m=+1433.354554047" watchObservedRunningTime="2026-02-14 14:16:01.394725939 +0000 UTC m=+1433.420715430" Feb 14 14:16:01 crc kubenswrapper[4750]: I0214 14:16:01.719228 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.045989 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.086666 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.196434 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-config\") pod \"925a32d5-f7e2-4339-84ea-a5011abb19f1\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.196629 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-combined-ca-bundle\") pod \"925a32d5-f7e2-4339-84ea-a5011abb19f1\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.196768 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-httpd-config\") pod \"925a32d5-f7e2-4339-84ea-a5011abb19f1\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.196823 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnd4\" (UniqueName: \"kubernetes.io/projected/925a32d5-f7e2-4339-84ea-a5011abb19f1-kube-api-access-7dnd4\") pod 
\"925a32d5-f7e2-4339-84ea-a5011abb19f1\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.196904 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-ovndb-tls-certs\") pod \"925a32d5-f7e2-4339-84ea-a5011abb19f1\" (UID: \"925a32d5-f7e2-4339-84ea-a5011abb19f1\") " Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.202623 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925a32d5-f7e2-4339-84ea-a5011abb19f1-kube-api-access-7dnd4" (OuterVolumeSpecName: "kube-api-access-7dnd4") pod "925a32d5-f7e2-4339-84ea-a5011abb19f1" (UID: "925a32d5-f7e2-4339-84ea-a5011abb19f1"). InnerVolumeSpecName "kube-api-access-7dnd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.208717 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "925a32d5-f7e2-4339-84ea-a5011abb19f1" (UID: "925a32d5-f7e2-4339-84ea-a5011abb19f1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.221101 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.286558 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-config" (OuterVolumeSpecName: "config") pod "925a32d5-f7e2-4339-84ea-a5011abb19f1" (UID: "925a32d5-f7e2-4339-84ea-a5011abb19f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.299337 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.299366 4750 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.299376 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dnd4\" (UniqueName: \"kubernetes.io/projected/925a32d5-f7e2-4339-84ea-a5011abb19f1-kube-api-access-7dnd4\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.311077 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "925a32d5-f7e2-4339-84ea-a5011abb19f1" (UID: "925a32d5-f7e2-4339-84ea-a5011abb19f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.322747 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ecffb890-5905-4fc1-a005-86519c0c6aea","Type":"ContainerStarted","Data":"0ea56dabf611eced9d14b958ffbb3931c5d86eac17184d1bcfc1d9c939407e70"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.323917 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "925a32d5-f7e2-4339-84ea-a5011abb19f1" (UID: "925a32d5-f7e2-4339-84ea-a5011abb19f1"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.327030 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1aaa3ab6-4225-441d-b5b9-85ec8d30ca01","Type":"ContainerStarted","Data":"71d762ced4b2ba19e89adc1bb3e50a1c57582280e8891f9ea67b56ae5dd3ca32"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.334275 4750 generic.go:334] "Generic (PLEG): container finished" podID="075e5fd7-94c0-48b5-9c6d-a7994c4479f7" containerID="8424770a65d2fd4a8710e640c1faf114950897a277c4dbf948cf1435c33ff81f" exitCode=0 Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.334370 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" event={"ID":"075e5fd7-94c0-48b5-9c6d-a7994c4479f7","Type":"ContainerDied","Data":"8424770a65d2fd4a8710e640c1faf114950897a277c4dbf948cf1435c33ff81f"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.337929 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe85d9dd-19fc-4155-af2c-62cc62eb029c","Type":"ContainerStarted","Data":"e88e633b869b7e74edfedc46bdf8e153a4a45e0739d1d09cc05c9c15801a5762"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.337992 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe85d9dd-19fc-4155-af2c-62cc62eb029c","Type":"ContainerStarted","Data":"d99080197206e3e5fb2165a365be301d1bf3ce2435d269837e800f7bcc9a8b49"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.352390 4750 generic.go:334] "Generic (PLEG): container finished" podID="c038eeb5-80d1-4e94-a5de-31b056dac6f7" containerID="0796b0757cbee597c4d0dddf5b2fc2c1e90e21d75e7c9427b8d2ceb6561ab6bb" exitCode=0 Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.352491 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxk9n" 
event={"ID":"c038eeb5-80d1-4e94-a5de-31b056dac6f7","Type":"ContainerDied","Data":"0796b0757cbee597c4d0dddf5b2fc2c1e90e21d75e7c9427b8d2ceb6561ab6bb"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.352805 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.514968652 podStartE2EDuration="3.352761148s" podCreationTimestamp="2026-02-14 14:15:59 +0000 UTC" firstStartedPulling="2026-02-14 14:16:00.559191758 +0000 UTC m=+1432.585181239" lastFinishedPulling="2026-02-14 14:16:01.396984254 +0000 UTC m=+1433.422973735" observedRunningTime="2026-02-14 14:16:02.352548322 +0000 UTC m=+1434.378537803" watchObservedRunningTime="2026-02-14 14:16:02.352761148 +0000 UTC m=+1434.378750629" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.355590 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"98683c54-5137-4357-be49-f22cdf9715db","Type":"ContainerStarted","Data":"41510a4e3bdf475b51993bab6f9553af7564dcda12f469cfcfa64c888e0ced4d"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.356007 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.362397 4750 generic.go:334] "Generic (PLEG): container finished" podID="da7aabd1-3708-46b0-a20d-25fb29aa26ef" containerID="a67f4c0bf20f873aeaaaeacb4e67be31502ae99c7333e53a6c3c4845b5c1df9c" exitCode=0 Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.362478 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" event={"ID":"da7aabd1-3708-46b0-a20d-25fb29aa26ef","Type":"ContainerDied","Data":"a67f4c0bf20f873aeaaaeacb4e67be31502ae99c7333e53a6c3c4845b5c1df9c"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.403876 4750 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.404108 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925a32d5-f7e2-4339-84ea-a5011abb19f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.412610 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27ce82d4-dcbe-48fe-8b91-8704ef172bf1","Type":"ContainerStarted","Data":"35d2011cb79f998b4b647a23626d2b257ff6bcff0cbf150fcbea7f502e11c1b3"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.416974 4750 generic.go:334] "Generic (PLEG): container finished" podID="1d155439-48d6-4b0b-b096-70e2527f2d7f" containerID="f1e9bdbacff15ea4d132835b29bff39c95dabcea5765aaf6edde4ebc1334bb8e" exitCode=0 Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.417032 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" event={"ID":"1d155439-48d6-4b0b-b096-70e2527f2d7f","Type":"ContainerDied","Data":"f1e9bdbacff15ea4d132835b29bff39c95dabcea5765aaf6edde4ebc1334bb8e"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.441644 4750 generic.go:334] "Generic (PLEG): container finished" podID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerID="f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee" exitCode=0 Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.441757 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-789b68c888-x8zqh" event={"ID":"925a32d5-f7e2-4339-84ea-a5011abb19f1","Type":"ContainerDied","Data":"f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.441787 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-789b68c888-x8zqh" event={"ID":"925a32d5-f7e2-4339-84ea-a5011abb19f1","Type":"ContainerDied","Data":"5e6df20013daa610569e80ab673223f684ab4fdf6f09ed2b9d71bb30331de28a"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.441815 4750 scope.go:117] "RemoveContainer" containerID="682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.442275 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-789b68c888-x8zqh" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.454293 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerStarted","Data":"68383926e3889e5bbfe71e22e4b17f2fbaa60d2341e4c68af9509311af679834"} Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.459503 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.929582333 podStartE2EDuration="3.459473135s" podCreationTimestamp="2026-02-14 14:15:59 +0000 UTC" firstStartedPulling="2026-02-14 14:16:00.139104811 +0000 UTC m=+1432.165094292" lastFinishedPulling="2026-02-14 14:16:00.668995613 +0000 UTC m=+1432.694985094" observedRunningTime="2026-02-14 14:16:02.386646782 +0000 UTC m=+1434.412636273" watchObservedRunningTime="2026-02-14 14:16:02.459473135 +0000 UTC m=+1434.485462606" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.543503 4750 scope.go:117] "RemoveContainer" containerID="f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.572464 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-789b68c888-x8zqh"] Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.594350 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-789b68c888-x8zqh"] Feb 14 14:16:02 crc 
kubenswrapper[4750]: I0214 14:16:02.613578 4750 scope.go:117] "RemoveContainer" containerID="682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab" Feb 14 14:16:02 crc kubenswrapper[4750]: E0214 14:16:02.617231 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab\": container with ID starting with 682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab not found: ID does not exist" containerID="682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.617272 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab"} err="failed to get container status \"682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab\": rpc error: code = NotFound desc = could not find container \"682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab\": container with ID starting with 682ac88a9db3600f41d19ca30a00a6609771d8a3cf7aacb121e78a453f3f25ab not found: ID does not exist" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.617296 4750 scope.go:117] "RemoveContainer" containerID="f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee" Feb 14 14:16:02 crc kubenswrapper[4750]: E0214 14:16:02.618166 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee\": container with ID starting with f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee not found: ID does not exist" containerID="f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.618189 4750 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee"} err="failed to get container status \"f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee\": rpc error: code = NotFound desc = could not find container \"f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee\": container with ID starting with f76b478f93452bbbfa748137dca17aeae7e445439c4dcb9654c1cfdb66f8f5ee not found: ID does not exist" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.803758 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" path="/var/lib/kubelet/pods/925a32d5-f7e2-4339-84ea-a5011abb19f1/volumes" Feb 14 14:16:02 crc kubenswrapper[4750]: I0214 14:16:02.872530 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.022781 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crrxv\" (UniqueName: \"kubernetes.io/projected/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-kube-api-access-crrxv\") pod \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.023331 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-operator-scripts\") pod \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\" (UID: \"075e5fd7-94c0-48b5-9c6d-a7994c4479f7\") " Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.069639 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "075e5fd7-94c0-48b5-9c6d-a7994c4479f7" (UID: "075e5fd7-94c0-48b5-9c6d-a7994c4479f7"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.102345 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-kube-api-access-crrxv" (OuterVolumeSpecName: "kube-api-access-crrxv") pod "075e5fd7-94c0-48b5-9c6d-a7994c4479f7" (UID: "075e5fd7-94c0-48b5-9c6d-a7994c4479f7"). InnerVolumeSpecName "kube-api-access-crrxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.126684 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crrxv\" (UniqueName: \"kubernetes.io/projected/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-kube-api-access-crrxv\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.126718 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075e5fd7-94c0-48b5-9c6d-a7994c4479f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.201966 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.212511 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j27zc" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.347055 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trt97\" (UniqueName: \"kubernetes.io/projected/55c02c14-a300-466f-b53c-8f7a96be136a-kube-api-access-trt97\") pod \"55c02c14-a300-466f-b53c-8f7a96be136a\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.347419 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhvp7\" (UniqueName: \"kubernetes.io/projected/e32c67d8-34c3-4340-b584-704ba0540dc7-kube-api-access-jhvp7\") pod \"e32c67d8-34c3-4340-b584-704ba0540dc7\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.347499 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e32c67d8-34c3-4340-b584-704ba0540dc7-operator-scripts\") pod \"e32c67d8-34c3-4340-b584-704ba0540dc7\" (UID: \"e32c67d8-34c3-4340-b584-704ba0540dc7\") " Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.347535 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55c02c14-a300-466f-b53c-8f7a96be136a-operator-scripts\") pod \"55c02c14-a300-466f-b53c-8f7a96be136a\" (UID: \"55c02c14-a300-466f-b53c-8f7a96be136a\") " Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.350517 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32c67d8-34c3-4340-b584-704ba0540dc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e32c67d8-34c3-4340-b584-704ba0540dc7" (UID: "e32c67d8-34c3-4340-b584-704ba0540dc7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.351588 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c02c14-a300-466f-b53c-8f7a96be136a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55c02c14-a300-466f-b53c-8f7a96be136a" (UID: "55c02c14-a300-466f-b53c-8f7a96be136a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.360501 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32c67d8-34c3-4340-b584-704ba0540dc7-kube-api-access-jhvp7" (OuterVolumeSpecName: "kube-api-access-jhvp7") pod "e32c67d8-34c3-4340-b584-704ba0540dc7" (UID: "e32c67d8-34c3-4340-b584-704ba0540dc7"). InnerVolumeSpecName "kube-api-access-jhvp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.364559 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e32c67d8-34c3-4340-b584-704ba0540dc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.364614 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55c02c14-a300-466f-b53c-8f7a96be136a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.364652 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhvp7\" (UniqueName: \"kubernetes.io/projected/e32c67d8-34c3-4340-b584-704ba0540dc7-kube-api-access-jhvp7\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.366612 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/55c02c14-a300-466f-b53c-8f7a96be136a-kube-api-access-trt97" (OuterVolumeSpecName: "kube-api-access-trt97") pod "55c02c14-a300-466f-b53c-8f7a96be136a" (UID: "55c02c14-a300-466f-b53c-8f7a96be136a"). InnerVolumeSpecName "kube-api-access-trt97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.467468 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trt97\" (UniqueName: \"kubernetes.io/projected/55c02c14-a300-466f-b53c-8f7a96be136a-kube-api-access-trt97\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.495867 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" event={"ID":"075e5fd7-94c0-48b5-9c6d-a7994c4479f7","Type":"ContainerDied","Data":"79eb9885a3b84636316260344fb20ba91053ec82f5e93661e2ec135e9ae055f5"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.495913 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79eb9885a3b84636316260344fb20ba91053ec82f5e93661e2ec135e9ae055f5" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.495997 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8f8b-account-create-update-8qtk7" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.516382 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe85d9dd-19fc-4155-af2c-62cc62eb029c","Type":"ContainerStarted","Data":"21330e27bcdcdeef8bb7dd95fe7b0bbc1b5d491ed8b2a8bd666c9bac54e8154a"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.534370 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27ce82d4-dcbe-48fe-8b91-8704ef172bf1","Type":"ContainerStarted","Data":"ec3111506ddd436d4fbb54fa1f4c2ed58af3bdad6ffed2ee743a6886122e2285"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.581181 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j27zc" event={"ID":"55c02c14-a300-466f-b53c-8f7a96be136a","Type":"ContainerDied","Data":"054e15e353de1b3c2c621bb4007658a77536d407344ae849e202c6e19db897c8"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.581233 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054e15e353de1b3c2c621bb4007658a77536d407344ae849e202c6e19db897c8" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.583397 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j27zc" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.603440 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.603419864 podStartE2EDuration="4.603419864s" podCreationTimestamp="2026-02-14 14:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:03.570840906 +0000 UTC m=+1435.596830387" watchObservedRunningTime="2026-02-14 14:16:03.603419864 +0000 UTC m=+1435.629409345" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.617738 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerStarted","Data":"3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.647121 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lm7k9" event={"ID":"e32c67d8-34c3-4340-b584-704ba0540dc7","Type":"ContainerDied","Data":"ce8a22d398db31cd01aef76c6753a0d4d5fe9bab90747530f09e86616af57b8d"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.647164 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8a22d398db31cd01aef76c6753a0d4d5fe9bab90747530f09e86616af57b8d" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.647244 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lm7k9" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.661437 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ecffb890-5905-4fc1-a005-86519c0c6aea","Type":"ContainerStarted","Data":"2f52ec796cfcd5db0764b78f50862cdb4214e7c8bdf80b58e5f5357500d0d946"} Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.662498 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 14:16:03 crc kubenswrapper[4750]: I0214 14:16:03.923431 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.923414152 podStartE2EDuration="6.923414152s" podCreationTimestamp="2026-02-14 14:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:03.713790815 +0000 UTC m=+1435.739780296" watchObservedRunningTime="2026-02-14 14:16:03.923414152 +0000 UTC m=+1435.949403633" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.180991 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282186 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9bp8d"] Feb 14 14:16:04 crc kubenswrapper[4750]: E0214 14:16:04.282693 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d155439-48d6-4b0b-b096-70e2527f2d7f" containerName="mariadb-account-create-update" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282709 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d155439-48d6-4b0b-b096-70e2527f2d7f" containerName="mariadb-account-create-update" Feb 14 14:16:04 crc kubenswrapper[4750]: E0214 14:16:04.282735 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-httpd" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282741 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-httpd" Feb 14 14:16:04 crc kubenswrapper[4750]: E0214 14:16:04.282749 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32c67d8-34c3-4340-b584-704ba0540dc7" containerName="mariadb-database-create" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282755 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32c67d8-34c3-4340-b584-704ba0540dc7" containerName="mariadb-database-create" Feb 14 14:16:04 crc kubenswrapper[4750]: E0214 14:16:04.282768 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075e5fd7-94c0-48b5-9c6d-a7994c4479f7" containerName="mariadb-account-create-update" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282773 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="075e5fd7-94c0-48b5-9c6d-a7994c4479f7" containerName="mariadb-account-create-update" Feb 14 14:16:04 crc kubenswrapper[4750]: E0214 14:16:04.282782 4750 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-api" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282788 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-api" Feb 14 14:16:04 crc kubenswrapper[4750]: E0214 14:16:04.282798 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c02c14-a300-466f-b53c-8f7a96be136a" containerName="mariadb-database-create" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.282804 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c02c14-a300-466f-b53c-8f7a96be136a" containerName="mariadb-database-create" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.283002 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c02c14-a300-466f-b53c-8f7a96be136a" containerName="mariadb-database-create" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.283018 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="075e5fd7-94c0-48b5-9c6d-a7994c4479f7" containerName="mariadb-account-create-update" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.283026 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32c67d8-34c3-4340-b584-704ba0540dc7" containerName="mariadb-database-create" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.283040 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-httpd" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.283050 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="925a32d5-f7e2-4339-84ea-a5011abb19f1" containerName="neutron-api" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.283061 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d155439-48d6-4b0b-b096-70e2527f2d7f" containerName="mariadb-account-create-update" Feb 14 14:16:04 crc 
kubenswrapper[4750]: I0214 14:16:04.284748 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.298006 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcd9j\" (UniqueName: \"kubernetes.io/projected/1d155439-48d6-4b0b-b096-70e2527f2d7f-kube-api-access-bcd9j\") pod \"1d155439-48d6-4b0b-b096-70e2527f2d7f\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.298389 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d155439-48d6-4b0b-b096-70e2527f2d7f-operator-scripts\") pod \"1d155439-48d6-4b0b-b096-70e2527f2d7f\" (UID: \"1d155439-48d6-4b0b-b096-70e2527f2d7f\") " Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.299776 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d155439-48d6-4b0b-b096-70e2527f2d7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d155439-48d6-4b0b-b096-70e2527f2d7f" (UID: "1d155439-48d6-4b0b-b096-70e2527f2d7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.309068 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d155439-48d6-4b0b-b096-70e2527f2d7f-kube-api-access-bcd9j" (OuterVolumeSpecName: "kube-api-access-bcd9j") pod "1d155439-48d6-4b0b-b096-70e2527f2d7f" (UID: "1d155439-48d6-4b0b-b096-70e2527f2d7f"). InnerVolumeSpecName "kube-api-access-bcd9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.330532 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9bp8d"] Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.404452 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-catalog-content\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.404624 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-utilities\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.404653 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxjn\" (UniqueName: \"kubernetes.io/projected/864aae15-a641-418d-9df3-8d91a914ccfa-kube-api-access-qcxjn\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.404925 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d155439-48d6-4b0b-b096-70e2527f2d7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.404945 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcd9j\" (UniqueName: \"kubernetes.io/projected/1d155439-48d6-4b0b-b096-70e2527f2d7f-kube-api-access-bcd9j\") on node \"crc\" 
DevicePath \"\"" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.452069 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.486375 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.514651 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-catalog-content\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.514802 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-utilities\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.514821 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxjn\" (UniqueName: \"kubernetes.io/projected/864aae15-a641-418d-9df3-8d91a914ccfa-kube-api-access-qcxjn\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.515492 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-catalog-content\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc 
kubenswrapper[4750]: I0214 14:16:04.515694 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-utilities\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.553979 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxjn\" (UniqueName: \"kubernetes.io/projected/864aae15-a641-418d-9df3-8d91a914ccfa-kube-api-access-qcxjn\") pod \"redhat-operators-9bp8d\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.557772 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.617430 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqw2g\" (UniqueName: \"kubernetes.io/projected/c038eeb5-80d1-4e94-a5de-31b056dac6f7-kube-api-access-wqw2g\") pod \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.617829 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwwxn\" (UniqueName: \"kubernetes.io/projected/da7aabd1-3708-46b0-a20d-25fb29aa26ef-kube-api-access-jwwxn\") pod \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\" (UID: \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.617910 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da7aabd1-3708-46b0-a20d-25fb29aa26ef-operator-scripts\") pod \"da7aabd1-3708-46b0-a20d-25fb29aa26ef\" (UID: 
\"da7aabd1-3708-46b0-a20d-25fb29aa26ef\") " Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.617956 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c038eeb5-80d1-4e94-a5de-31b056dac6f7-operator-scripts\") pod \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\" (UID: \"c038eeb5-80d1-4e94-a5de-31b056dac6f7\") " Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.621621 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c038eeb5-80d1-4e94-a5de-31b056dac6f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c038eeb5-80d1-4e94-a5de-31b056dac6f7" (UID: "c038eeb5-80d1-4e94-a5de-31b056dac6f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.622726 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da7aabd1-3708-46b0-a20d-25fb29aa26ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da7aabd1-3708-46b0-a20d-25fb29aa26ef" (UID: "da7aabd1-3708-46b0-a20d-25fb29aa26ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.627571 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7aabd1-3708-46b0-a20d-25fb29aa26ef-kube-api-access-jwwxn" (OuterVolumeSpecName: "kube-api-access-jwwxn") pod "da7aabd1-3708-46b0-a20d-25fb29aa26ef" (UID: "da7aabd1-3708-46b0-a20d-25fb29aa26ef"). InnerVolumeSpecName "kube-api-access-jwwxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.628479 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c038eeb5-80d1-4e94-a5de-31b056dac6f7-kube-api-access-wqw2g" (OuterVolumeSpecName: "kube-api-access-wqw2g") pod "c038eeb5-80d1-4e94-a5de-31b056dac6f7" (UID: "c038eeb5-80d1-4e94-a5de-31b056dac6f7"). InnerVolumeSpecName "kube-api-access-wqw2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.653882 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.663472 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b6899f7-wlrpr" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.692745 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerStarted","Data":"0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899"} Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.692786 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerStarted","Data":"5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba"} Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.705126 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" event={"ID":"da7aabd1-3708-46b0-a20d-25fb29aa26ef","Type":"ContainerDied","Data":"0f25203ea71d12dbf6499e57525928f3035be9818aed0272d2465c1e261cb6cf"} Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.705158 4750 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0f25203ea71d12dbf6499e57525928f3035be9818aed0272d2465c1e261cb6cf" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.705211 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e02b-account-create-update-gvd4d" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.720167 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da7aabd1-3708-46b0-a20d-25fb29aa26ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.720195 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c038eeb5-80d1-4e94-a5de-31b056dac6f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.720204 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqw2g\" (UniqueName: \"kubernetes.io/projected/c038eeb5-80d1-4e94-a5de-31b056dac6f7-kube-api-access-wqw2g\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.720215 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwwxn\" (UniqueName: \"kubernetes.io/projected/da7aabd1-3708-46b0-a20d-25fb29aa26ef-kube-api-access-jwwxn\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.721347 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27ce82d4-dcbe-48fe-8b91-8704ef172bf1","Type":"ContainerStarted","Data":"1e6b35feb1a14a63d78a5e30a79e89098682143bcfbc2531597547fb5ec268ad"} Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.741875 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.766203 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.766181769 podStartE2EDuration="4.766181769s" podCreationTimestamp="2026-02-14 14:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:04.753137458 +0000 UTC m=+1436.779126939" watchObservedRunningTime="2026-02-14 14:16:04.766181769 +0000 UTC m=+1436.792171250" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.785641 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zxk9n" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.799196 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9dff-account-create-update-sjzvt" event={"ID":"1d155439-48d6-4b0b-b096-70e2527f2d7f","Type":"ContainerDied","Data":"275eefc069f5aea05c662c1050706af645cc0e4676c787339ea79ff26ffb7a83"} Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.799235 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="275eefc069f5aea05c662c1050706af645cc0e4676c787339ea79ff26ffb7a83" Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.799248 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxk9n" event={"ID":"c038eeb5-80d1-4e94-a5de-31b056dac6f7","Type":"ContainerDied","Data":"fc3ac522eeababf156563a05f50233d388a335ed3e7eef21efd7816dfe343d78"} Feb 14 14:16:04 crc kubenswrapper[4750]: I0214 14:16:04.799259 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc3ac522eeababf156563a05f50233d388a335ed3e7eef21efd7816dfe343d78" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.080903 4750 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5794bb6d69-xd29g"] Feb 14 14:16:05 crc kubenswrapper[4750]: E0214 14:16:05.081489 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c038eeb5-80d1-4e94-a5de-31b056dac6f7" containerName="mariadb-database-create" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.081504 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c038eeb5-80d1-4e94-a5de-31b056dac6f7" containerName="mariadb-database-create" Feb 14 14:16:05 crc kubenswrapper[4750]: E0214 14:16:05.081517 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7aabd1-3708-46b0-a20d-25fb29aa26ef" containerName="mariadb-account-create-update" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.081524 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7aabd1-3708-46b0-a20d-25fb29aa26ef" containerName="mariadb-account-create-update" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.081733 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="c038eeb5-80d1-4e94-a5de-31b056dac6f7" containerName="mariadb-database-create" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.081749 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7aabd1-3708-46b0-a20d-25fb29aa26ef" containerName="mariadb-account-create-update" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.082697 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.085147 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.100436 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.100694 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dc62p" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.105928 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5794bb6d69-xd29g"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.150205 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-59cf69fd49-rvv8r"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.153840 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.160563 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.185182 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59cf69fd49-rvv8r"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.233234 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-8fdrk"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.235583 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.243974 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-8fdrk"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.244599 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.244634 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-combined-ca-bundle\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.244679 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-combined-ca-bundle\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.244781 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7skvb\" (UniqueName: \"kubernetes.io/projected/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-kube-api-access-7skvb\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.244815 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.245556 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data-custom\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.245656 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data-custom\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.245683 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsjd\" (UniqueName: \"kubernetes.io/projected/3d9c8c1e-f288-4429-b701-acc9dd04808e-kube-api-access-nzsjd\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.266942 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9bp8d"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.284813 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-69f59c97bd-6wd8f"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.290354 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.294501 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.311892 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69f59c97bd-6wd8f"] Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350409 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-combined-ca-bundle\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350504 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-combined-ca-bundle\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350537 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data-custom\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350574 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7skvb\" (UniqueName: \"kubernetes.io/projected/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-kube-api-access-7skvb\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " 
pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350596 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350627 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350645 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350665 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdtmx\" (UniqueName: \"kubernetes.io/projected/32b3ee0b-ff6c-43d9-88e9-cad127186847-kube-api-access-bdtmx\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350703 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data-custom\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " 
pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350727 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-config\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350743 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350773 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtxg\" (UniqueName: \"kubernetes.io/projected/b2ed0d86-71db-4bad-a5db-e596444d65f6-kube-api-access-lmtxg\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350800 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data-custom\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350819 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzsjd\" (UniqueName: \"kubernetes.io/projected/3d9c8c1e-f288-4429-b701-acc9dd04808e-kube-api-access-nzsjd\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: 
\"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350841 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.350905 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.355245 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.355346 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-combined-ca-bundle\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.359910 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data-custom\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: 
\"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.362195 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data-custom\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.363512 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.370240 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.382990 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7skvb\" (UniqueName: \"kubernetes.io/projected/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-kube-api-access-7skvb\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.388557 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzsjd\" (UniqueName: \"kubernetes.io/projected/3d9c8c1e-f288-4429-b701-acc9dd04808e-kube-api-access-nzsjd\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " 
pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.390155 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-combined-ca-bundle\") pod \"heat-engine-5794bb6d69-xd29g\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.393951 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-combined-ca-bundle\") pod \"heat-cfnapi-59cf69fd49-rvv8r\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.416759 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462678 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-combined-ca-bundle\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462747 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data-custom\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462802 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462847 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462879 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdtmx\" (UniqueName: \"kubernetes.io/projected/32b3ee0b-ff6c-43d9-88e9-cad127186847-kube-api-access-bdtmx\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462952 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-config\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.462974 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.463016 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtxg\" (UniqueName: 
\"kubernetes.io/projected/b2ed0d86-71db-4bad-a5db-e596444d65f6-kube-api-access-lmtxg\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.463065 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.463129 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.467458 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.467975 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.468675 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.469598 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-config\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.471373 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.474205 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data-custom\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.476508 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-combined-ca-bundle\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.482158 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data\") pod 
\"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.489779 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdtmx\" (UniqueName: \"kubernetes.io/projected/32b3ee0b-ff6c-43d9-88e9-cad127186847-kube-api-access-bdtmx\") pod \"heat-api-69f59c97bd-6wd8f\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.495866 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtxg\" (UniqueName: \"kubernetes.io/projected/b2ed0d86-71db-4bad-a5db-e596444d65f6-kube-api-access-lmtxg\") pod \"dnsmasq-dns-7756b9d78c-8fdrk\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") " pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.496434 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.567126 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.583470 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.856462 4750 generic.go:334] "Generic (PLEG): container finished" podID="864aae15-a641-418d-9df3-8d91a914ccfa" containerID="1ebc0634f2a1682681f747e320af0cafc7271be862342adf8c30291df5f00e4a" exitCode=0 Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.856716 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerDied","Data":"1ebc0634f2a1682681f747e320af0cafc7271be862342adf8c30291df5f00e4a"} Feb 14 14:16:05 crc kubenswrapper[4750]: I0214 14:16:05.856908 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerStarted","Data":"96f2e0266cf554aea9d0a7728940d6e35c973b44fc2aaca237b44b5e99883ea9"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.596247 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5794bb6d69-xd29g"] Feb 14 14:16:06 crc kubenswrapper[4750]: W0214 14:16:06.596694 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d9c8c1e_f288_4429_b701_acc9dd04808e.slice/crio-2eb69e2cf75c16fd0bf91a4ae7efce69e20fb9b7e86d230876632020f1348861 WatchSource:0}: Error finding container 2eb69e2cf75c16fd0bf91a4ae7efce69e20fb9b7e86d230876632020f1348861: Status 404 returned error can't find the container with id 2eb69e2cf75c16fd0bf91a4ae7efce69e20fb9b7e86d230876632020f1348861 Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.639343 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69f59c97bd-6wd8f"] Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.682881 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59cf69fd49-rvv8r"] Feb 
14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.701192 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-8fdrk"] Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.879924 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5794bb6d69-xd29g" event={"ID":"3d9c8c1e-f288-4429-b701-acc9dd04808e","Type":"ContainerStarted","Data":"2eb69e2cf75c16fd0bf91a4ae7efce69e20fb9b7e86d230876632020f1348861"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.880843 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" event={"ID":"b2ed0d86-71db-4bad-a5db-e596444d65f6","Type":"ContainerStarted","Data":"3e37ca3a66ce60072db55ed5f910089011cf3546d38a5669164dbf5b7bc7944b"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.881587 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" event={"ID":"e82b038d-e8f2-47be-8903-eb7a0a0f70fd","Type":"ContainerStarted","Data":"0d355e550a61ebd70d05370097a6144dbaa4d16681af087e86c932ef2b734f64"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.887658 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerStarted","Data":"95a0c5ff717fdce072867fdafc844ca0399014a525ac5e0204047083990a755e"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.890316 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69f59c97bd-6wd8f" event={"ID":"32b3ee0b-ff6c-43d9-88e9-cad127186847","Type":"ContainerStarted","Data":"44e61ab5279ffa180a227310152746f1d196012b244cc37c06c3b078826d82a6"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.893547 4750 generic.go:334] "Generic (PLEG): container finished" podID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerID="fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd" exitCode=1 Feb 
14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.893603 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerDied","Data":"fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd"} Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.893699 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-central-agent" containerID="cri-o://3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf" gracePeriod=30 Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.893802 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="sg-core" containerID="cri-o://0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899" gracePeriod=30 Feb 14 14:16:06 crc kubenswrapper[4750]: I0214 14:16:06.893842 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-notification-agent" containerID="cri-o://5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba" gracePeriod=30 Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.189340 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kfvr"] Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.200147 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.210717 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.210995 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.211280 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-psvbd" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.211428 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kfvr"] Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.316432 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-scripts\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.316681 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.316823 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwm6s\" (UniqueName: \"kubernetes.io/projected/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-kube-api-access-wwm6s\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " 
pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.316926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-config-data\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.424692 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwm6s\" (UniqueName: \"kubernetes.io/projected/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-kube-api-access-wwm6s\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.424791 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-config-data\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.427469 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-scripts\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.427655 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: 
\"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.433273 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.433670 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-scripts\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.438788 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-config-data\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.446888 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwm6s\" (UniqueName: \"kubernetes.io/projected/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-kube-api-access-wwm6s\") pod \"nova-cell0-conductor-db-sync-2kfvr\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.543130 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.910921 4750 generic.go:334] "Generic (PLEG): container finished" podID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerID="42139d7feee7fc43603e45a1ed86961e488202069bf483d00431d3caa6f156aa" exitCode=0 Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.910986 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" event={"ID":"b2ed0d86-71db-4bad-a5db-e596444d65f6","Type":"ContainerDied","Data":"42139d7feee7fc43603e45a1ed86961e488202069bf483d00431d3caa6f156aa"} Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.919949 4750 generic.go:334] "Generic (PLEG): container finished" podID="864aae15-a641-418d-9df3-8d91a914ccfa" containerID="95a0c5ff717fdce072867fdafc844ca0399014a525ac5e0204047083990a755e" exitCode=0 Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.920017 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerDied","Data":"95a0c5ff717fdce072867fdafc844ca0399014a525ac5e0204047083990a755e"} Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.931165 4750 generic.go:334] "Generic (PLEG): container finished" podID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerID="0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899" exitCode=2 Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.931203 4750 generic.go:334] "Generic (PLEG): container finished" podID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerID="5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba" exitCode=0 Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.931303 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerDied","Data":"0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899"} Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.931338 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerDied","Data":"5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba"} Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.942909 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5794bb6d69-xd29g" event={"ID":"3d9c8c1e-f288-4429-b701-acc9dd04808e","Type":"ContainerStarted","Data":"719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431"} Feb 14 14:16:07 crc kubenswrapper[4750]: I0214 14:16:07.942975 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:08 crc kubenswrapper[4750]: I0214 14:16:08.002414 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5794bb6d69-xd29g" podStartSLOduration=3.002392639 podStartE2EDuration="3.002392639s" podCreationTimestamp="2026-02-14 14:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:07.988278777 +0000 UTC m=+1440.014268278" watchObservedRunningTime="2026-02-14 14:16:08.002392639 +0000 UTC m=+1440.028382120" Feb 14 14:16:08 crc kubenswrapper[4750]: I0214 14:16:08.120567 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kfvr"] Feb 14 14:16:08 crc kubenswrapper[4750]: I0214 14:16:08.963410 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" event={"ID":"b2ed0d86-71db-4bad-a5db-e596444d65f6","Type":"ContainerStarted","Data":"30c209d1b27b37211b0cd6b45b8493f672acfdde0c2d3872d829ad1220b9ea7f"} Feb 14 
14:16:08 crc kubenswrapper[4750]: I0214 14:16:08.965068 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:08 crc kubenswrapper[4750]: I0214 14:16:08.969748 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" event={"ID":"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf","Type":"ContainerStarted","Data":"7b34d3a80dfeefcddcdec3be6ae78a2a72be812579faef81757d0d1c69b9ef8c"} Feb 14 14:16:09 crc kubenswrapper[4750]: I0214 14:16:09.423501 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 14:16:09 crc kubenswrapper[4750]: I0214 14:16:09.451673 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" podStartSLOduration=4.451549226 podStartE2EDuration="4.451549226s" podCreationTimestamp="2026-02-14 14:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:08.986800898 +0000 UTC m=+1441.012790379" watchObservedRunningTime="2026-02-14 14:16:09.451549226 +0000 UTC m=+1441.477538707" Feb 14 14:16:10 crc kubenswrapper[4750]: I0214 14:16:10.102914 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 14:16:10 crc kubenswrapper[4750]: I0214 14:16:10.104930 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 14:16:10 crc kubenswrapper[4750]: I0214 14:16:10.158562 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 14:16:10 crc kubenswrapper[4750]: I0214 14:16:10.158729 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 14:16:10 crc 
kubenswrapper[4750]: I0214 14:16:10.991853 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 14:16:10 crc kubenswrapper[4750]: I0214 14:16:10.992635 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.084738 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.084805 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.143669 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.162439 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.865841 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-59b797bddd-xm4wn"] Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.867777 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.896695 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-59b797bddd-xm4wn"] Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.912183 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5748477f5c-8zjgh"] Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.913815 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.941828 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5748477f5c-8zjgh"] Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.966800 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.966869 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8c8h\" (UniqueName: \"kubernetes.io/projected/fd095088-0ec4-428e-bfef-11c7c57ecfdb-kube-api-access-z8c8h\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.966959 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t458r\" (UniqueName: \"kubernetes.io/projected/fe4f92a9-d75d-4965-a7ad-4415676d65b6-kube-api-access-t458r\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.966985 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data-custom\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.967032 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-combined-ca-bundle\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.967130 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.967158 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-combined-ca-bundle\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:11 crc kubenswrapper[4750]: I0214 14:16:11.967278 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data-custom\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.004633 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d54f5975d-tsf47"] Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.006683 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.025575 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.025840 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d54f5975d-tsf47"] Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.030328 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069334 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-combined-ca-bundle\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069409 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069436 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-combined-ca-bundle\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069511 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069554 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data-custom\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069606 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data-custom\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069629 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-combined-ca-bundle\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069682 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99pgm\" (UniqueName: \"kubernetes.io/projected/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-kube-api-access-99pgm\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069783 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069832 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8c8h\" (UniqueName: \"kubernetes.io/projected/fd095088-0ec4-428e-bfef-11c7c57ecfdb-kube-api-access-z8c8h\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069872 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t458r\" (UniqueName: \"kubernetes.io/projected/fe4f92a9-d75d-4965-a7ad-4415676d65b6-kube-api-access-t458r\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.069887 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data-custom\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.078048 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.079956 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-combined-ca-bundle\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.082854 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-combined-ca-bundle\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.083003 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data-custom\") pod \"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.083059 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data-custom\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.087897 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.089041 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t458r\" (UniqueName: \"kubernetes.io/projected/fe4f92a9-d75d-4965-a7ad-4415676d65b6-kube-api-access-t458r\") pod 
\"heat-api-5748477f5c-8zjgh\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.091309 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8c8h\" (UniqueName: \"kubernetes.io/projected/fd095088-0ec4-428e-bfef-11c7c57ecfdb-kube-api-access-z8c8h\") pod \"heat-engine-59b797bddd-xm4wn\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.171939 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data-custom\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.171994 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-combined-ca-bundle\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.172050 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99pgm\" (UniqueName: \"kubernetes.io/projected/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-kube-api-access-99pgm\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.172240 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: 
\"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.176737 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data-custom\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.177470 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-combined-ca-bundle\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.179968 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.190960 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99pgm\" (UniqueName: \"kubernetes.io/projected/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-kube-api-access-99pgm\") pod \"heat-cfnapi-7d54f5975d-tsf47\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.204803 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.236023 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.368371 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.762017 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.781870 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-59b797bddd-xm4wn"] Feb 14 14:16:12 crc kubenswrapper[4750]: W0214 14:16:12.783980 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd095088_0ec4_428e_bfef_11c7c57ecfdb.slice/crio-eb7ba83f1ab3b095221719ae811ff45233935ea826f4a8e9587c6a689dbcbdb5 WatchSource:0}: Error finding container eb7ba83f1ab3b095221719ae811ff45233935ea826f4a8e9587c6a689dbcbdb5: Status 404 returned error can't find the container with id eb7ba83f1ab3b095221719ae811ff45233935ea826f4a8e9587c6a689dbcbdb5 Feb 14 14:16:12 crc kubenswrapper[4750]: I0214 14:16:12.855137 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5474bc9d4d-7h6tg" Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.015684 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f9f7d688-glpvb"] Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.016264 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6f9f7d688-glpvb" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-log" containerID="cri-o://af54dc8bb6b112162d2bc32bd882c1f654dc53ba10caa3ffc47d38a0064e71bb" gracePeriod=30 Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.016424 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-6f9f7d688-glpvb" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-api" containerID="cri-o://59844409a78c8cef20e9660672e95c489ca8fbf8c1d0d56503a178e40b5ae2ee" gracePeriod=30 Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.058075 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d54f5975d-tsf47"] Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.105475 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" event={"ID":"e82b038d-e8f2-47be-8903-eb7a0a0f70fd","Type":"ContainerStarted","Data":"2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2"} Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.105895 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.131574 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" podStartSLOduration=3.09830227 podStartE2EDuration="8.131557838s" podCreationTimestamp="2026-02-14 14:16:05 +0000 UTC" firstStartedPulling="2026-02-14 14:16:06.634694112 +0000 UTC m=+1438.660683593" lastFinishedPulling="2026-02-14 14:16:11.66794968 +0000 UTC m=+1443.693939161" observedRunningTime="2026-02-14 14:16:13.125867686 +0000 UTC m=+1445.151857167" watchObservedRunningTime="2026-02-14 14:16:13.131557838 +0000 UTC m=+1445.157547319" Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.141906 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" event={"ID":"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b","Type":"ContainerStarted","Data":"5aa36d361b52f457d19ce12d950102f00461f3762cf421b14d710bd92d184926"} Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.147150 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59b797bddd-xm4wn" 
event={"ID":"fd095088-0ec4-428e-bfef-11c7c57ecfdb","Type":"ContainerStarted","Data":"eb7ba83f1ab3b095221719ae811ff45233935ea826f4a8e9587c6a689dbcbdb5"} Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.151860 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerStarted","Data":"dc4eb70df0864db6c6c5de127a2b2d2ecf0964679372dc96627bb6e89a430442"} Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.157202 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69f59c97bd-6wd8f" event={"ID":"32b3ee0b-ff6c-43d9-88e9-cad127186847","Type":"ContainerStarted","Data":"3022a0f63b83cca6768cda62fc6ce56774fa47b22b69f693e9dd3997f25cbd90"} Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.157577 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.183278 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9bp8d" podStartSLOduration=3.403782832 podStartE2EDuration="9.183158536s" podCreationTimestamp="2026-02-14 14:16:04 +0000 UTC" firstStartedPulling="2026-02-14 14:16:05.882270976 +0000 UTC m=+1437.908260457" lastFinishedPulling="2026-02-14 14:16:11.66164668 +0000 UTC m=+1443.687636161" observedRunningTime="2026-02-14 14:16:13.17484284 +0000 UTC m=+1445.200832321" watchObservedRunningTime="2026-02-14 14:16:13.183158536 +0000 UTC m=+1445.209148027" Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.247089 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-69f59c97bd-6wd8f" podStartSLOduration=3.202886196 podStartE2EDuration="8.247068895s" podCreationTimestamp="2026-02-14 14:16:05 +0000 UTC" firstStartedPulling="2026-02-14 14:16:06.617474542 +0000 UTC m=+1438.643464023" lastFinishedPulling="2026-02-14 
14:16:11.661657231 +0000 UTC m=+1443.687646722" observedRunningTime="2026-02-14 14:16:13.198771711 +0000 UTC m=+1445.224761192" watchObservedRunningTime="2026-02-14 14:16:13.247068895 +0000 UTC m=+1445.273058376" Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.505015 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5748477f5c-8zjgh"] Feb 14 14:16:13 crc kubenswrapper[4750]: I0214 14:16:13.596573 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ecffb890-5905-4fc1-a005-86519c0c6aea" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.220:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.007444 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.135083 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-run-httpd\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.135442 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-scripts\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.135598 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-sg-core-conf-yaml\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc 
kubenswrapper[4750]: I0214 14:16:14.135707 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-log-httpd\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.135861 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-combined-ca-bundle\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.136044 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-config-data\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.136233 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kc95\" (UniqueName: \"kubernetes.io/projected/91a9248d-fe82-42eb-8b98-cc1812b56dc7-kube-api-access-7kc95\") pod \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\" (UID: \"91a9248d-fe82-42eb-8b98-cc1812b56dc7\") " Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.137698 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.137960 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.138408 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.138520 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a9248d-fe82-42eb-8b98-cc1812b56dc7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.148514 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-scripts" (OuterVolumeSpecName: "scripts") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.156293 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a9248d-fe82-42eb-8b98-cc1812b56dc7-kube-api-access-7kc95" (OuterVolumeSpecName: "kube-api-access-7kc95") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "kube-api-access-7kc95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.198566 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.215756 4750 generic.go:334] "Generic (PLEG): container finished" podID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerID="af54dc8bb6b112162d2bc32bd882c1f654dc53ba10caa3ffc47d38a0064e71bb" exitCode=143 Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.215820 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f9f7d688-glpvb" event={"ID":"dbac059a-8e0e-46a2-857c-b2515fa2a8eb","Type":"ContainerDied","Data":"af54dc8bb6b112162d2bc32bd882c1f654dc53ba10caa3ffc47d38a0064e71bb"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.221602 4750 generic.go:334] "Generic (PLEG): container finished" podID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerID="4132506503da51a5d3c0296a6d64bd534fce3b6d9266955f0db02c37229cacd2" exitCode=1 Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.221655 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" event={"ID":"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b","Type":"ContainerDied","Data":"4132506503da51a5d3c0296a6d64bd534fce3b6d9266955f0db02c37229cacd2"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.222435 4750 scope.go:117] "RemoveContainer" containerID="4132506503da51a5d3c0296a6d64bd534fce3b6d9266955f0db02c37229cacd2" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.235362 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59b797bddd-xm4wn" 
event={"ID":"fd095088-0ec4-428e-bfef-11c7c57ecfdb","Type":"ContainerStarted","Data":"1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.235524 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.240797 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kc95\" (UniqueName: \"kubernetes.io/projected/91a9248d-fe82-42eb-8b98-cc1812b56dc7-kube-api-access-7kc95\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.240838 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.240851 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.298346 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5748477f5c-8zjgh" event={"ID":"fe4f92a9-d75d-4965-a7ad-4415676d65b6","Type":"ContainerStarted","Data":"9bcfcf1b7dc33f18821fc62f5564d9dba8b0047078fc19075499353e5b12ce06"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.298598 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5748477f5c-8zjgh" event={"ID":"fe4f92a9-d75d-4965-a7ad-4415676d65b6","Type":"ContainerStarted","Data":"529f94cef90daac6ebb0327e5356a7ff1f5c2d7ab2925fcd868066ad1f822056"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.300241 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.300432 4750 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.323218 4750 generic.go:334] "Generic (PLEG): container finished" podID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerID="3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf" exitCode=0 Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.323600 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.323706 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.325030 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.325753 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerDied","Data":"3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.325856 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a9248d-fe82-42eb-8b98-cc1812b56dc7","Type":"ContainerDied","Data":"68383926e3889e5bbfe71e22e4b17f2fbaa60d2341e4c68af9509311af679834"} Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.325927 4750 scope.go:117] "RemoveContainer" containerID="fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.337313 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5748477f5c-8zjgh" podStartSLOduration=3.337293785 podStartE2EDuration="3.337293785s" podCreationTimestamp="2026-02-14 14:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:14.3163685 +0000 UTC m=+1446.342357981" watchObservedRunningTime="2026-02-14 14:16:14.337293785 +0000 UTC m=+1446.363283266" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.345497 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.351825 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-59b797bddd-xm4wn" podStartSLOduration=3.351806478 podStartE2EDuration="3.351806478s" podCreationTimestamp="2026-02-14 14:16:11 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:14.28228584 +0000 UTC m=+1446.308275321" watchObservedRunningTime="2026-02-14 14:16:14.351806478 +0000 UTC m=+1446.377795959" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.371035 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-config-data" (OuterVolumeSpecName: "config-data") pod "91a9248d-fe82-42eb-8b98-cc1812b56dc7" (UID: "91a9248d-fe82-42eb-8b98-cc1812b56dc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.395661 4750 scope.go:117] "RemoveContainer" containerID="0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.447867 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a9248d-fe82-42eb-8b98-cc1812b56dc7-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.469420 4750 scope.go:117] "RemoveContainer" containerID="5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.509815 4750 scope.go:117] "RemoveContainer" containerID="3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.535168 4750 scope.go:117] "RemoveContainer" containerID="fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.535619 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd\": container with ID starting with fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd not 
found: ID does not exist" containerID="fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.535649 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd"} err="failed to get container status \"fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd\": rpc error: code = NotFound desc = could not find container \"fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd\": container with ID starting with fa53fe79f684852f269dc1e109a52f2dfc1f77fb9d6db31247d5daccf73e5cdd not found: ID does not exist" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.535670 4750 scope.go:117] "RemoveContainer" containerID="0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.539821 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899\": container with ID starting with 0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899 not found: ID does not exist" containerID="0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.539847 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899"} err="failed to get container status \"0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899\": rpc error: code = NotFound desc = could not find container \"0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899\": container with ID starting with 0add3f7e0b1e1f80f6dd2973f9100c58842ab3a3750bf81c6e4b246257dd7899 not found: ID does not exist" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.539865 
4750 scope.go:117] "RemoveContainer" containerID="5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.542275 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba\": container with ID starting with 5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba not found: ID does not exist" containerID="5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.542301 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba"} err="failed to get container status \"5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba\": rpc error: code = NotFound desc = could not find container \"5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba\": container with ID starting with 5d2551bb218f180866238e96bb949017ce950f0ee5d8dc01ed9c76bafab83cba not found: ID does not exist" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.542316 4750 scope.go:117] "RemoveContainer" containerID="3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.542584 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf\": container with ID starting with 3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf not found: ID does not exist" containerID="3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.542605 4750 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf"} err="failed to get container status \"3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf\": rpc error: code = NotFound desc = could not find container \"3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf\": container with ID starting with 3ef9e643784410f8b499a75568715105757ec079b6ee3c5c71414eb6abf48eaf not found: ID does not exist" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.567052 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.567315 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.692917 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.708654 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.740294 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.740805 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="sg-core" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.740822 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="sg-core" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.740839 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-central-agent" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.740847 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-central-agent" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.740877 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-notification-agent" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.740886 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-notification-agent" Feb 14 14:16:14 crc kubenswrapper[4750]: E0214 14:16:14.740903 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="proxy-httpd" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.740910 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="proxy-httpd" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.741211 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-notification-agent" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.741235 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="sg-core" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.741245 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="ceilometer-central-agent" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.741255 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" containerName="proxy-httpd" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.744306 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.746891 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.748678 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.748813 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.773425 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a9248d-fe82-42eb-8b98-cc1812b56dc7" path="/var/lib/kubelet/pods/91a9248d-fe82-42eb-8b98-cc1812b56dc7/volumes" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.781829 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920614 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920762 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-log-httpd\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920807 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920837 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-config-data\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920856 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2xx\" (UniqueName: \"kubernetes.io/projected/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-kube-api-access-zz2xx\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920909 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-run-httpd\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920933 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:14 crc kubenswrapper[4750]: I0214 14:16:14.920978 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-scripts\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: 
I0214 14:16:15.022830 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.022933 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-log-httpd\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.022973 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023002 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-config-data\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023021 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2xx\" (UniqueName: \"kubernetes.io/projected/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-kube-api-access-zz2xx\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023081 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-run-httpd\") 
pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023107 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023164 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-scripts\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023516 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-log-httpd\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.023585 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-run-httpd\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.035816 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.035933 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.037519 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-config-data\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.039737 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.060975 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-scripts\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.084825 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2xx\" (UniqueName: \"kubernetes.io/projected/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-kube-api-access-zz2xx\") pod \"ceilometer-0\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.342411 4750 generic.go:334] "Generic (PLEG): container finished" podID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerID="c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8" exitCode=1 Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.342734 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-7d54f5975d-tsf47" event={"ID":"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b","Type":"ContainerDied","Data":"c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8"} Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.342770 4750 scope.go:117] "RemoveContainer" containerID="4132506503da51a5d3c0296a6d64bd534fce3b6d9266955f0db02c37229cacd2" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.343530 4750 scope.go:117] "RemoveContainer" containerID="c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8" Feb 14 14:16:15 crc kubenswrapper[4750]: E0214 14:16:15.343821 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d54f5975d-tsf47_openstack(1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b)\"" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.347104 4750 generic.go:334] "Generic (PLEG): container finished" podID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerID="9bcfcf1b7dc33f18821fc62f5564d9dba8b0047078fc19075499353e5b12ce06" exitCode=1 Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.348158 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5748477f5c-8zjgh" event={"ID":"fe4f92a9-d75d-4965-a7ad-4415676d65b6","Type":"ContainerDied","Data":"9bcfcf1b7dc33f18821fc62f5564d9dba8b0047078fc19075499353e5b12ce06"} Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.348803 4750 scope.go:117] "RemoveContainer" containerID="9bcfcf1b7dc33f18821fc62f5564d9dba8b0047078fc19075499353e5b12ce06" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.370738 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.585338 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.662054 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9bp8d" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" probeResult="failure" output=< Feb 14 14:16:15 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:16:15 crc kubenswrapper[4750]: > Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.711643 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-c54ql"] Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.711879 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerName="dnsmasq-dns" containerID="cri-o://003af2487d17274f859bb41798795b9cc1317e7cb38338e850cb05caa7b7e0ee" gracePeriod=10 Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.879524 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-59cf69fd49-rvv8r"] Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.879700 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" podUID="e82b038d-e8f2-47be-8903-eb7a0a0f70fd" containerName="heat-cfnapi" containerID="cri-o://2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2" gracePeriod=60 Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.921450 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84668977c4-sql9c"] Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.924872 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.933798 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.934013 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 14 14:16:15 crc kubenswrapper[4750]: I0214 14:16:15.965504 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84668977c4-sql9c"] Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.018072 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69f59c97bd-6wd8f"] Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.032134 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-77656b9796-ff5r4"] Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.033765 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.061573 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77656b9796-ff5r4"] Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.069766 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data-custom\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.069863 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-combined-ca-bundle\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.069892 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-internal-tls-certs\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.069917 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data-custom\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.069945 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95m7g\" (UniqueName: \"kubernetes.io/projected/1e1b3da1-2a06-4b24-86e6-921864918a8e-kube-api-access-95m7g\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.069978 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-combined-ca-bundle\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.070020 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-internal-tls-certs\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.070064 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-public-tls-certs\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.070101 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwshx\" (UniqueName: \"kubernetes.io/projected/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-kube-api-access-xwshx\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc 
kubenswrapper[4750]: I0214 14:16:16.070147 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-public-tls-certs\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.070166 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.070193 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.077394 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.078060 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.178983 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179053 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179336 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data-custom\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179466 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-combined-ca-bundle\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179532 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-internal-tls-certs\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179561 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data-custom\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179615 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-95m7g\" (UniqueName: \"kubernetes.io/projected/1e1b3da1-2a06-4b24-86e6-921864918a8e-kube-api-access-95m7g\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179662 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-combined-ca-bundle\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179767 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-internal-tls-certs\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179840 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-public-tls-certs\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179914 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwshx\" (UniqueName: \"kubernetes.io/projected/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-kube-api-access-xwshx\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.179981 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-public-tls-certs\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.191706 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-combined-ca-bundle\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.193581 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data-custom\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.196254 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-internal-tls-certs\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.196303 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.197553 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data-custom\") pod 
\"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.200961 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-public-tls-certs\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.203952 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.213811 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.214471 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-combined-ca-bundle\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.215042 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-internal-tls-certs\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.215333 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-public-tls-certs\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.224035 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95m7g\" (UniqueName: \"kubernetes.io/projected/1e1b3da1-2a06-4b24-86e6-921864918a8e-kube-api-access-95m7g\") pod \"heat-api-77656b9796-ff5r4\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.224456 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwshx\" (UniqueName: \"kubernetes.io/projected/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-kube-api-access-xwshx\") pod \"heat-cfnapi-84668977c4-sql9c\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.273844 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.402411 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.446532 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerStarted","Data":"302acbb181d47a731cf6d9990ccdac18ab5dc76897054e6bc33309837e805c0f"} Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.459624 4750 scope.go:117] "RemoveContainer" containerID="c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8" Feb 14 14:16:16 crc kubenswrapper[4750]: E0214 14:16:16.460326 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d54f5975d-tsf47_openstack(1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b)\"" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.478311 4750 generic.go:334] "Generic (PLEG): container finished" podID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerID="003af2487d17274f859bb41798795b9cc1317e7cb38338e850cb05caa7b7e0ee" exitCode=0 Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.478402 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" event={"ID":"a920ce34-52b6-45f6-ac27-3aa85015c234","Type":"ContainerDied","Data":"003af2487d17274f859bb41798795b9cc1317e7cb38338e850cb05caa7b7e0ee"} Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.510715 4750 generic.go:334] "Generic (PLEG): container finished" podID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerID="7683332fffa29f3109d705e3912accc25d48ed5b7e6c73ea8b94e4fd4e459ad5" exitCode=1 Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.511595 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5748477f5c-8zjgh" 
event={"ID":"fe4f92a9-d75d-4965-a7ad-4415676d65b6","Type":"ContainerDied","Data":"7683332fffa29f3109d705e3912accc25d48ed5b7e6c73ea8b94e4fd4e459ad5"} Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.511628 4750 scope.go:117] "RemoveContainer" containerID="9bcfcf1b7dc33f18821fc62f5564d9dba8b0047078fc19075499353e5b12ce06" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.512527 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-69f59c97bd-6wd8f" podUID="32b3ee0b-ff6c-43d9-88e9-cad127186847" containerName="heat-api" containerID="cri-o://3022a0f63b83cca6768cda62fc6ce56774fa47b22b69f693e9dd3997f25cbd90" gracePeriod=60 Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.513277 4750 scope.go:117] "RemoveContainer" containerID="7683332fffa29f3109d705e3912accc25d48ed5b7e6c73ea8b94e4fd4e459ad5" Feb 14 14:16:16 crc kubenswrapper[4750]: E0214 14:16:16.513876 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5748477f5c-8zjgh_openstack(fe4f92a9-d75d-4965-a7ad-4415676d65b6)\"" pod="openstack/heat-api-5748477f5c-8zjgh" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.721824 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.823087 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdkf\" (UniqueName: \"kubernetes.io/projected/a920ce34-52b6-45f6-ac27-3aa85015c234-kube-api-access-gxdkf\") pod \"a920ce34-52b6-45f6-ac27-3aa85015c234\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.823220 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-config\") pod \"a920ce34-52b6-45f6-ac27-3aa85015c234\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.823287 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-swift-storage-0\") pod \"a920ce34-52b6-45f6-ac27-3aa85015c234\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.823348 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-svc\") pod \"a920ce34-52b6-45f6-ac27-3aa85015c234\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.823409 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-nb\") pod \"a920ce34-52b6-45f6-ac27-3aa85015c234\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.823440 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-sb\") pod \"a920ce34-52b6-45f6-ac27-3aa85015c234\" (UID: \"a920ce34-52b6-45f6-ac27-3aa85015c234\") " Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.862077 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a920ce34-52b6-45f6-ac27-3aa85015c234-kube-api-access-gxdkf" (OuterVolumeSpecName: "kube-api-access-gxdkf") pod "a920ce34-52b6-45f6-ac27-3aa85015c234" (UID: "a920ce34-52b6-45f6-ac27-3aa85015c234"). InnerVolumeSpecName "kube-api-access-gxdkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.925808 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxdkf\" (UniqueName: \"kubernetes.io/projected/a920ce34-52b6-45f6-ac27-3aa85015c234-kube-api-access-gxdkf\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.962548 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a920ce34-52b6-45f6-ac27-3aa85015c234" (UID: "a920ce34-52b6-45f6-ac27-3aa85015c234"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.966022 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-config" (OuterVolumeSpecName: "config") pod "a920ce34-52b6-45f6-ac27-3aa85015c234" (UID: "a920ce34-52b6-45f6-ac27-3aa85015c234"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:16 crc kubenswrapper[4750]: I0214 14:16:16.978371 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a920ce34-52b6-45f6-ac27-3aa85015c234" (UID: "a920ce34-52b6-45f6-ac27-3aa85015c234"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.002611 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a920ce34-52b6-45f6-ac27-3aa85015c234" (UID: "a920ce34-52b6-45f6-ac27-3aa85015c234"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.028723 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.028759 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.028767 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.028776 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 
14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.036655 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a920ce34-52b6-45f6-ac27-3aa85015c234" (UID: "a920ce34-52b6-45f6-ac27-3aa85015c234"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.084873 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.084990 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.131027 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a920ce34-52b6-45f6-ac27-3aa85015c234-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.191747 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.192027 4750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.195605 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.237756 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.237821 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.313813 4750 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.373278 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.373328 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.393683 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77656b9796-ff5r4"] Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.436872 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.541766 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-combined-ca-bundle\") pod \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.542183 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data\") pod \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.542278 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7skvb\" (UniqueName: \"kubernetes.io/projected/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-kube-api-access-7skvb\") pod \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.542328 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data-custom\") pod \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\" (UID: \"e82b038d-e8f2-47be-8903-eb7a0a0f70fd\") " Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.546954 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-kube-api-access-7skvb" (OuterVolumeSpecName: "kube-api-access-7skvb") pod "e82b038d-e8f2-47be-8903-eb7a0a0f70fd" (UID: "e82b038d-e8f2-47be-8903-eb7a0a0f70fd"). InnerVolumeSpecName "kube-api-access-7skvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.550867 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7skvb\" (UniqueName: \"kubernetes.io/projected/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-kube-api-access-7skvb\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.558282 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e82b038d-e8f2-47be-8903-eb7a0a0f70fd" (UID: "e82b038d-e8f2-47be-8903-eb7a0a0f70fd"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.562798 4750 generic.go:334] "Generic (PLEG): container finished" podID="e82b038d-e8f2-47be-8903-eb7a0a0f70fd" containerID="2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2" exitCode=0 Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.563020 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" event={"ID":"e82b038d-e8f2-47be-8903-eb7a0a0f70fd","Type":"ContainerDied","Data":"2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.563039 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.563065 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59cf69fd49-rvv8r" event={"ID":"e82b038d-e8f2-47be-8903-eb7a0a0f70fd","Type":"ContainerDied","Data":"0d355e550a61ebd70d05370097a6144dbaa4d16681af087e86c932ef2b734f64"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.563100 4750 scope.go:117] "RemoveContainer" containerID="2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.584878 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" event={"ID":"a920ce34-52b6-45f6-ac27-3aa85015c234","Type":"ContainerDied","Data":"adf2db2a871a2658587fcff668a82dfe3ff38c2b6a2275d84b0d96826fe7e108"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.585020 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-c54ql" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.621282 4750 scope.go:117] "RemoveContainer" containerID="7683332fffa29f3109d705e3912accc25d48ed5b7e6c73ea8b94e4fd4e459ad5" Feb 14 14:16:17 crc kubenswrapper[4750]: E0214 14:16:17.621502 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5748477f5c-8zjgh_openstack(fe4f92a9-d75d-4965-a7ad-4415676d65b6)\"" pod="openstack/heat-api-5748477f5c-8zjgh" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.637996 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77656b9796-ff5r4" event={"ID":"1e1b3da1-2a06-4b24-86e6-921864918a8e","Type":"ContainerStarted","Data":"d8787dc9c117fa3763e58750a45ac36fc2140b1c1abda414951a41bb5b6f7686"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.655184 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-c54ql"] Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.655516 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.686172 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-c54ql"] Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.706822 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e82b038d-e8f2-47be-8903-eb7a0a0f70fd" (UID: "e82b038d-e8f2-47be-8903-eb7a0a0f70fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.717019 4750 generic.go:334] "Generic (PLEG): container finished" podID="32b3ee0b-ff6c-43d9-88e9-cad127186847" containerID="3022a0f63b83cca6768cda62fc6ce56774fa47b22b69f693e9dd3997f25cbd90" exitCode=0 Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.717104 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69f59c97bd-6wd8f" event={"ID":"32b3ee0b-ff6c-43d9-88e9-cad127186847","Type":"ContainerDied","Data":"3022a0f63b83cca6768cda62fc6ce56774fa47b22b69f693e9dd3997f25cbd90"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.717208 4750 scope.go:117] "RemoveContainer" containerID="2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2" Feb 14 14:16:17 crc kubenswrapper[4750]: E0214 14:16:17.719560 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2\": container with ID starting with 2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2 not found: ID does not exist" containerID="2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.719962 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2"} err="failed to get container status \"2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2\": rpc error: code = NotFound desc = could not find container \"2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2\": container with ID starting with 2c623841554db340a28354f59badc4d93d36daae9d0958ba2d254e46b13559d2 not found: ID does not exist" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.719987 4750 scope.go:117] "RemoveContainer" 
containerID="003af2487d17274f859bb41798795b9cc1317e7cb38338e850cb05caa7b7e0ee" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.746914 4750 generic.go:334] "Generic (PLEG): container finished" podID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerID="59844409a78c8cef20e9660672e95c489ca8fbf8c1d0d56503a178e40b5ae2ee" exitCode=0 Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.747107 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f9f7d688-glpvb" event={"ID":"dbac059a-8e0e-46a2-857c-b2515fa2a8eb","Type":"ContainerDied","Data":"59844409a78c8cef20e9660672e95c489ca8fbf8c1d0d56503a178e40b5ae2ee"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.761463 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.787690 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerStarted","Data":"3fb4d25232478b96d7b2730853da2acaa2a038a3a6569dbd4e79693a4d285524"} Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.788070 4750 scope.go:117] "RemoveContainer" containerID="c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8" Feb 14 14:16:17 crc kubenswrapper[4750]: E0214 14:16:17.788325 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d54f5975d-tsf47_openstack(1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b)\"" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.791536 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data" (OuterVolumeSpecName: "config-data") pod "e82b038d-e8f2-47be-8903-eb7a0a0f70fd" (UID: "e82b038d-e8f2-47be-8903-eb7a0a0f70fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.915247 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82b038d-e8f2-47be-8903-eb7a0a0f70fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.937552 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84668977c4-sql9c"] Feb 14 14:16:17 crc kubenswrapper[4750]: I0214 14:16:17.990487 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.012239 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-59cf69fd49-rvv8r"] Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016059 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-config-data\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016101 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-logs\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016173 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqlxn\" (UniqueName: 
\"kubernetes.io/projected/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-kube-api-access-hqlxn\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016331 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-internal-tls-certs\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016393 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-public-tls-certs\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016443 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-combined-ca-bundle\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016512 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.016546 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-scripts\") pod \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\" (UID: \"dbac059a-8e0e-46a2-857c-b2515fa2a8eb\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.017829 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-logs" (OuterVolumeSpecName: "logs") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.029436 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-59cf69fd49-rvv8r"] Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.034253 4750 scope.go:117] "RemoveContainer" containerID="cc9ad5fb23a2dd0e28a7b00c510ffa2cee4038e314473889ddf997de1bbf13e2" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.034841 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-kube-api-access-hqlxn" (OuterVolumeSpecName: "kube-api-access-hqlxn") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). InnerVolumeSpecName "kube-api-access-hqlxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.060790 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-scripts" (OuterVolumeSpecName: "scripts") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.080678 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.129053 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data\") pod \"32b3ee0b-ff6c-43d9-88e9-cad127186847\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.129137 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdtmx\" (UniqueName: \"kubernetes.io/projected/32b3ee0b-ff6c-43d9-88e9-cad127186847-kube-api-access-bdtmx\") pod \"32b3ee0b-ff6c-43d9-88e9-cad127186847\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.129182 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data-custom\") pod \"32b3ee0b-ff6c-43d9-88e9-cad127186847\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.129297 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-combined-ca-bundle\") pod \"32b3ee0b-ff6c-43d9-88e9-cad127186847\" (UID: \"32b3ee0b-ff6c-43d9-88e9-cad127186847\") " Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.130398 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqlxn\" (UniqueName: \"kubernetes.io/projected/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-kube-api-access-hqlxn\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.130423 4750 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.130437 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.181311 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b3ee0b-ff6c-43d9-88e9-cad127186847-kube-api-access-bdtmx" (OuterVolumeSpecName: "kube-api-access-bdtmx") pod "32b3ee0b-ff6c-43d9-88e9-cad127186847" (UID: "32b3ee0b-ff6c-43d9-88e9-cad127186847"). InnerVolumeSpecName "kube-api-access-bdtmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.232247 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdtmx\" (UniqueName: \"kubernetes.io/projected/32b3ee0b-ff6c-43d9-88e9-cad127186847-kube-api-access-bdtmx\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.240275 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "32b3ee0b-ff6c-43d9-88e9-cad127186847" (UID: "32b3ee0b-ff6c-43d9-88e9-cad127186847"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.315467 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.335183 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.335216 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.671558 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.705849 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32b3ee0b-ff6c-43d9-88e9-cad127186847" (UID: "32b3ee0b-ff6c-43d9-88e9-cad127186847"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.725536 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-config-data" (OuterVolumeSpecName: "config-data") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.754457 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.754490 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.754500 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.763060 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" path="/var/lib/kubelet/pods/a920ce34-52b6-45f6-ac27-3aa85015c234/volumes" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.769749 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82b038d-e8f2-47be-8903-eb7a0a0f70fd" path="/var/lib/kubelet/pods/e82b038d-e8f2-47be-8903-eb7a0a0f70fd/volumes" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.812104 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f9f7d688-glpvb" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.847576 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69f59c97bd-6wd8f" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.849804 4750 scope.go:117] "RemoveContainer" containerID="7683332fffa29f3109d705e3912accc25d48ed5b7e6c73ea8b94e4fd4e459ad5" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.849873 4750 scope.go:117] "RemoveContainer" containerID="c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8" Feb 14 14:16:18 crc kubenswrapper[4750]: E0214 14:16:18.850055 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5748477f5c-8zjgh_openstack(fe4f92a9-d75d-4965-a7ad-4415676d65b6)\"" pod="openstack/heat-api-5748477f5c-8zjgh" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" Feb 14 14:16:18 crc kubenswrapper[4750]: E0214 14:16:18.850276 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d54f5975d-tsf47_openstack(1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b)\"" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.926310 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-77656b9796-ff5r4" podStartSLOduration=3.926291559 podStartE2EDuration="3.926291559s" podCreationTimestamp="2026-02-14 14:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:18.874137875 +0000 UTC m=+1450.900127356" watchObservedRunningTime="2026-02-14 14:16:18.926291559 +0000 UTC m=+1450.952281040" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.941282 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data" (OuterVolumeSpecName: "config-data") pod "32b3ee0b-ff6c-43d9-88e9-cad127186847" (UID: "32b3ee0b-ff6c-43d9-88e9-cad127186847"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:18 crc kubenswrapper[4750]: I0214 14:16:18.959187 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b3ee0b-ff6c-43d9-88e9-cad127186847-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.014252 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dbac059a-8e0e-46a2-857c-b2515fa2a8eb" (UID: "dbac059a-8e0e-46a2-857c-b2515fa2a8eb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.062063 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbac059a-8e0e-46a2-857c-b2515fa2a8eb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104422 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104464 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f9f7d688-glpvb" event={"ID":"dbac059a-8e0e-46a2-857c-b2515fa2a8eb","Type":"ContainerDied","Data":"7bd2f16b4e1c25ec72847e89a3b3801fa2ec078cce736178bc3a0bb717dc8419"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104490 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104509 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-cfnapi-84668977c4-sql9c" event={"ID":"02f8868c-0e0f-4e75-9ef2-d74188d5fcda","Type":"ContainerStarted","Data":"6cabedd0535a9948909e6f7f34cfdbfcbfc0c86ba6b3333c9ff644ab5d2761ba"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104519 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77656b9796-ff5r4" event={"ID":"1e1b3da1-2a06-4b24-86e6-921864918a8e","Type":"ContainerStarted","Data":"7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104528 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69f59c97bd-6wd8f" event={"ID":"32b3ee0b-ff6c-43d9-88e9-cad127186847","Type":"ContainerDied","Data":"44e61ab5279ffa180a227310152746f1d196012b244cc37c06c3b078826d82a6"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.104549 4750 scope.go:117] "RemoveContainer" containerID="59844409a78c8cef20e9660672e95c489ca8fbf8c1d0d56503a178e40b5ae2ee" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.143388 4750 scope.go:117] "RemoveContainer" containerID="af54dc8bb6b112162d2bc32bd882c1f654dc53ba10caa3ffc47d38a0064e71bb" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.160090 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f9f7d688-glpvb"] Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.171579 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6f9f7d688-glpvb"] Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.196493 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69f59c97bd-6wd8f"] Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.207482 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-69f59c97bd-6wd8f"] Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.212258 4750 scope.go:117] "RemoveContainer" 
containerID="3022a0f63b83cca6768cda62fc6ce56774fa47b22b69f693e9dd3997f25cbd90" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.871391 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerStarted","Data":"e5151df532b61298796eaa278b77e06ff496bcf871789cf6b6eb8ac335768db6"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.871716 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerStarted","Data":"18639eb24017ab82d8aff1ac7c036806fc5628d373d77d0ca9cefffad580a168"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.874681 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84668977c4-sql9c" event={"ID":"02f8868c-0e0f-4e75-9ef2-d74188d5fcda","Type":"ContainerStarted","Data":"4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293"} Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.874804 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:19 crc kubenswrapper[4750]: I0214 14:16:19.897297 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84668977c4-sql9c" podStartSLOduration=4.897273906 podStartE2EDuration="4.897273906s" podCreationTimestamp="2026-02-14 14:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:19.892214352 +0000 UTC m=+1451.918203843" watchObservedRunningTime="2026-02-14 14:16:19.897273906 +0000 UTC m=+1451.923263387" Feb 14 14:16:20 crc kubenswrapper[4750]: I0214 14:16:20.758881 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b3ee0b-ff6c-43d9-88e9-cad127186847" path="/var/lib/kubelet/pods/32b3ee0b-ff6c-43d9-88e9-cad127186847/volumes" Feb 14 
14:16:20 crc kubenswrapper[4750]: I0214 14:16:20.760023 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" path="/var/lib/kubelet/pods/dbac059a-8e0e-46a2-857c-b2515fa2a8eb/volumes" Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.915879 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerStarted","Data":"6ff4641c5301beb769914d3b15ff2fa09b529150db490a5e4595cfc1ed99112b"} Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.916215 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-central-agent" containerID="cri-o://3fb4d25232478b96d7b2730853da2acaa2a038a3a6569dbd4e79693a4d285524" gracePeriod=30 Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.916270 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.916316 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="proxy-httpd" containerID="cri-o://6ff4641c5301beb769914d3b15ff2fa09b529150db490a5e4595cfc1ed99112b" gracePeriod=30 Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.916363 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="sg-core" containerID="cri-o://e5151df532b61298796eaa278b77e06ff496bcf871789cf6b6eb8ac335768db6" gracePeriod=30 Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.916400 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-notification-agent" 
containerID="cri-o://18639eb24017ab82d8aff1ac7c036806fc5628d373d77d0ca9cefffad580a168" gracePeriod=30 Feb 14 14:16:21 crc kubenswrapper[4750]: I0214 14:16:21.952137 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.185934904 podStartE2EDuration="7.952092261s" podCreationTimestamp="2026-02-14 14:16:14 +0000 UTC" firstStartedPulling="2026-02-14 14:16:16.242279636 +0000 UTC m=+1448.268269107" lastFinishedPulling="2026-02-14 14:16:21.008436983 +0000 UTC m=+1453.034426464" observedRunningTime="2026-02-14 14:16:21.946002107 +0000 UTC m=+1453.971991588" watchObservedRunningTime="2026-02-14 14:16:21.952092261 +0000 UTC m=+1453.978081742" Feb 14 14:16:22 crc kubenswrapper[4750]: I0214 14:16:22.932360 4750 generic.go:334] "Generic (PLEG): container finished" podID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerID="6ff4641c5301beb769914d3b15ff2fa09b529150db490a5e4595cfc1ed99112b" exitCode=0 Feb 14 14:16:22 crc kubenswrapper[4750]: I0214 14:16:22.932610 4750 generic.go:334] "Generic (PLEG): container finished" podID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerID="e5151df532b61298796eaa278b77e06ff496bcf871789cf6b6eb8ac335768db6" exitCode=2 Feb 14 14:16:22 crc kubenswrapper[4750]: I0214 14:16:22.932439 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerDied","Data":"6ff4641c5301beb769914d3b15ff2fa09b529150db490a5e4595cfc1ed99112b"} Feb 14 14:16:22 crc kubenswrapper[4750]: I0214 14:16:22.932694 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerDied","Data":"e5151df532b61298796eaa278b77e06ff496bcf871789cf6b6eb8ac335768db6"} Feb 14 14:16:22 crc kubenswrapper[4750]: I0214 14:16:22.932712 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerDied","Data":"18639eb24017ab82d8aff1ac7c036806fc5628d373d77d0ca9cefffad580a168"} Feb 14 14:16:22 crc kubenswrapper[4750]: I0214 14:16:22.932619 4750 generic.go:334] "Generic (PLEG): container finished" podID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerID="18639eb24017ab82d8aff1ac7c036806fc5628d373d77d0ca9cefffad580a168" exitCode=0 Feb 14 14:16:25 crc kubenswrapper[4750]: I0214 14:16:25.472467 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:25 crc kubenswrapper[4750]: I0214 14:16:25.626767 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9bp8d" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" probeResult="failure" output=< Feb 14 14:16:25 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:16:25 crc kubenswrapper[4750]: > Feb 14 14:16:28 crc kubenswrapper[4750]: I0214 14:16:28.023540 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" event={"ID":"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf","Type":"ContainerStarted","Data":"5e7d67ed00dd94f46370db5e9826c4e22d836135b7ea786bd63687adea02195b"} Feb 14 14:16:28 crc kubenswrapper[4750]: I0214 14:16:28.044061 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" podStartSLOduration=1.6583231889999999 podStartE2EDuration="21.044043653s" podCreationTimestamp="2026-02-14 14:16:07 +0000 UTC" firstStartedPulling="2026-02-14 14:16:08.132732699 +0000 UTC m=+1440.158722180" lastFinishedPulling="2026-02-14 14:16:27.518453163 +0000 UTC m=+1459.544442644" observedRunningTime="2026-02-14 14:16:28.042259142 +0000 UTC m=+1460.068248623" watchObservedRunningTime="2026-02-14 14:16:28.044043653 +0000 UTC m=+1460.070033134" Feb 14 14:16:28 crc 
kubenswrapper[4750]: I0214 14:16:28.504120 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:16:28 crc kubenswrapper[4750]: I0214 14:16:28.510431 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:16:28 crc kubenswrapper[4750]: I0214 14:16:28.565178 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5748477f5c-8zjgh"] Feb 14 14:16:28 crc kubenswrapper[4750]: I0214 14:16:28.675595 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7d54f5975d-tsf47"] Feb 14 14:16:28 crc kubenswrapper[4750]: I0214 14:16:28.999225 4750 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod046d0778-1fd6-4cfa-b632-4379f20af2b7"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod046d0778-1fd6-4cfa-b632-4379f20af2b7] : Timed out while waiting for systemd to remove kubepods-besteffort-pod046d0778_1fd6_4cfa_b632_4379f20af2b7.slice" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.177921 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.186735 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302313 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99pgm\" (UniqueName: \"kubernetes.io/projected/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-kube-api-access-99pgm\") pod \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302396 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data-custom\") pod \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302493 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data\") pod \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302552 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-combined-ca-bundle\") pod \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302609 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data-custom\") pod \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302638 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-combined-ca-bundle\") pod \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302716 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data\") pod \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\" (UID: \"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.302813 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t458r\" (UniqueName: \"kubernetes.io/projected/fe4f92a9-d75d-4965-a7ad-4415676d65b6-kube-api-access-t458r\") pod \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\" (UID: \"fe4f92a9-d75d-4965-a7ad-4415676d65b6\") " Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.307311 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" (UID: "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.310020 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe4f92a9-d75d-4965-a7ad-4415676d65b6" (UID: "fe4f92a9-d75d-4965-a7ad-4415676d65b6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.310097 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4f92a9-d75d-4965-a7ad-4415676d65b6-kube-api-access-t458r" (OuterVolumeSpecName: "kube-api-access-t458r") pod "fe4f92a9-d75d-4965-a7ad-4415676d65b6" (UID: "fe4f92a9-d75d-4965-a7ad-4415676d65b6"). InnerVolumeSpecName "kube-api-access-t458r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.320104 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-kube-api-access-99pgm" (OuterVolumeSpecName: "kube-api-access-99pgm") pod "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" (UID: "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b"). InnerVolumeSpecName "kube-api-access-99pgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.338301 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" (UID: "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.356192 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe4f92a9-d75d-4965-a7ad-4415676d65b6" (UID: "fe4f92a9-d75d-4965-a7ad-4415676d65b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.397422 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data" (OuterVolumeSpecName: "config-data") pod "fe4f92a9-d75d-4965-a7ad-4415676d65b6" (UID: "fe4f92a9-d75d-4965-a7ad-4415676d65b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405550 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405586 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405597 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405605 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405614 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t458r\" (UniqueName: \"kubernetes.io/projected/fe4f92a9-d75d-4965-a7ad-4415676d65b6-kube-api-access-t458r\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405625 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99pgm\" 
(UniqueName: \"kubernetes.io/projected/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-kube-api-access-99pgm\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.405633 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4f92a9-d75d-4965-a7ad-4415676d65b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.417490 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data" (OuterVolumeSpecName: "config-data") pod "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" (UID: "1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:29 crc kubenswrapper[4750]: I0214 14:16:29.507978 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.045015 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" event={"ID":"1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b","Type":"ContainerDied","Data":"5aa36d361b52f457d19ce12d950102f00461f3762cf421b14d710bd92d184926"} Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.045067 4750 scope.go:117] "RemoveContainer" containerID="c1813b91b15ae4bd88b6be562abfbd868dc043197c782be1f238dcb8201ee9c8" Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.045211 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d54f5975d-tsf47" Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.053781 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5748477f5c-8zjgh" event={"ID":"fe4f92a9-d75d-4965-a7ad-4415676d65b6","Type":"ContainerDied","Data":"529f94cef90daac6ebb0327e5356a7ff1f5c2d7ab2925fcd868066ad1f822056"} Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.053839 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5748477f5c-8zjgh" Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.112885 4750 scope.go:117] "RemoveContainer" containerID="7683332fffa29f3109d705e3912accc25d48ed5b7e6c73ea8b94e4fd4e459ad5" Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.148816 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7d54f5975d-tsf47"] Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.164980 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7d54f5975d-tsf47"] Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.176519 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5748477f5c-8zjgh"] Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.189337 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5748477f5c-8zjgh"] Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.760609 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" path="/var/lib/kubelet/pods/1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b/volumes" Feb 14 14:16:30 crc kubenswrapper[4750]: I0214 14:16:30.761709 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" path="/var/lib/kubelet/pods/fe4f92a9-d75d-4965-a7ad-4415676d65b6/volumes" Feb 14 14:16:32 crc kubenswrapper[4750]: I0214 14:16:32.254273 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:16:32 crc kubenswrapper[4750]: I0214 14:16:32.305332 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5794bb6d69-xd29g"] Feb 14 14:16:32 crc kubenswrapper[4750]: I0214 14:16:32.305566 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5794bb6d69-xd29g" podUID="3d9c8c1e-f288-4429-b701-acc9dd04808e" containerName="heat-engine" containerID="cri-o://719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" gracePeriod=60 Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.104751 4750 generic.go:334] "Generic (PLEG): container finished" podID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerID="3fb4d25232478b96d7b2730853da2acaa2a038a3a6569dbd4e79693a4d285524" exitCode=0 Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.104820 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerDied","Data":"3fb4d25232478b96d7b2730853da2acaa2a038a3a6569dbd4e79693a4d285524"} Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.493154 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.529552 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-sg-core-conf-yaml\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.529714 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-run-httpd\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.529803 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-combined-ca-bundle\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.529885 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-config-data\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.529914 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-ceilometer-tls-certs\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.530026 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-log-httpd\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.530075 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz2xx\" (UniqueName: \"kubernetes.io/projected/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-kube-api-access-zz2xx\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.530106 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-scripts\") pod \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\" (UID: \"accdfd90-00e8-4856-9855-3d4bcf3d4d5b\") " Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.530270 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.530759 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.530815 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.536335 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-kube-api-access-zz2xx" (OuterVolumeSpecName: "kube-api-access-zz2xx") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "kube-api-access-zz2xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.542020 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-scripts" (OuterVolumeSpecName: "scripts") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.577730 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.632713 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.632748 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz2xx\" (UniqueName: \"kubernetes.io/projected/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-kube-api-access-zz2xx\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.632760 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.632768 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.666956 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.672971 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.689086 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-config-data" (OuterVolumeSpecName: "config-data") pod "accdfd90-00e8-4856-9855-3d4bcf3d4d5b" (UID: "accdfd90-00e8-4856-9855-3d4bcf3d4d5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.735624 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.735683 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:34 crc kubenswrapper[4750]: I0214 14:16:34.735693 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdfd90-00e8-4856-9855-3d4bcf3d4d5b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.118912 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"accdfd90-00e8-4856-9855-3d4bcf3d4d5b","Type":"ContainerDied","Data":"302acbb181d47a731cf6d9990ccdac18ab5dc76897054e6bc33309837e805c0f"} Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.118968 4750 scope.go:117] "RemoveContainer" containerID="6ff4641c5301beb769914d3b15ff2fa09b529150db490a5e4595cfc1ed99112b" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.119151 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.143652 4750 scope.go:117] "RemoveContainer" containerID="e5151df532b61298796eaa278b77e06ff496bcf871789cf6b6eb8ac335768db6" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.146374 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.157375 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.166279 4750 scope.go:117] "RemoveContainer" containerID="18639eb24017ab82d8aff1ac7c036806fc5628d373d77d0ca9cefffad580a168" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.182944 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183690 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-central-agent" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183713 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-central-agent" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183731 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183738 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183750 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183757 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183776 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82b038d-e8f2-47be-8903-eb7a0a0f70fd" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183786 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82b038d-e8f2-47be-8903-eb7a0a0f70fd" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183797 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183804 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183816 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerName="init" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183824 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerName="init" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183835 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerName="dnsmasq-dns" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183842 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerName="dnsmasq-dns" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183851 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="proxy-httpd" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183858 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" 
containerName="proxy-httpd" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183873 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b3ee0b-ff6c-43d9-88e9-cad127186847" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183881 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b3ee0b-ff6c-43d9-88e9-cad127186847" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183897 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183913 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183922 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-log" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183927 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-log" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183951 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="sg-core" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183957 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="sg-core" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183968 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-notification-agent" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183975 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-notification-agent" Feb 14 
14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.183985 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.183993 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.184217 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.184984 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185011 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185023 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e82b038d-e8f2-47be-8903-eb7a0a0f70fd" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185032 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="sg-core" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185038 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="proxy-httpd" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185046 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4f92a9-d75d-4965-a7ad-4415676d65b6" containerName="heat-api" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185063 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b3ee0b-ff6c-43d9-88e9-cad127186847" containerName="heat-api" Feb 14 14:16:35 crc 
kubenswrapper[4750]: I0214 14:16:35.185075 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-central-agent" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185086 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" containerName="ceilometer-notification-agent" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185101 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbac059a-8e0e-46a2-857c-b2515fa2a8eb" containerName="placement-log" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185125 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="a920ce34-52b6-45f6-ac27-3aa85015c234" containerName="dnsmasq-dns" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.185568 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7d6eae-5f6c-4487-9b8a-cf49b63b7c7b" containerName="heat-cfnapi" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.187271 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.189850 4750 scope.go:117] "RemoveContainer" containerID="3fb4d25232478b96d7b2730853da2acaa2a038a3a6569dbd4e79693a4d285524" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.190053 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.191685 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.195549 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.214075 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247010 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247076 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-run-httpd\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247107 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " 
pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247146 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-log-httpd\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247164 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-scripts\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247252 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-config-data\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247279 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchxr\" (UniqueName: \"kubernetes.io/projected/3e1cbf61-1b23-417d-8b8d-b96591ead9de-kube-api-access-xchxr\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.247304 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352413 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352491 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-run-httpd\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352527 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352565 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-log-httpd\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352590 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-scripts\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352700 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-config-data\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " 
pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352736 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchxr\" (UniqueName: \"kubernetes.io/projected/3e1cbf61-1b23-417d-8b8d-b96591ead9de-kube-api-access-xchxr\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.352771 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.353564 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-log-httpd\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.354879 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-run-httpd\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.369033 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.369199 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-config-data\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.370095 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.371232 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.382134 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-scripts\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.408776 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchxr\" (UniqueName: \"kubernetes.io/projected/3e1cbf61-1b23-417d-8b8d-b96591ead9de-kube-api-access-xchxr\") pod \"ceilometer-0\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.430632 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:16:35 crc 
kubenswrapper[4750]: E0214 14:16:35.437853 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.456207 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:16:35 crc kubenswrapper[4750]: E0214 14:16:35.456270 4750 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5794bb6d69-xd29g" podUID="3d9c8c1e-f288-4429-b701-acc9dd04808e" containerName="heat-engine" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.514966 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.654144 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9bp8d" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" probeResult="failure" output=< Feb 14 14:16:35 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:16:35 crc kubenswrapper[4750]: > Feb 14 14:16:35 crc kubenswrapper[4750]: I0214 14:16:35.753926 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:36 crc kubenswrapper[4750]: I0214 14:16:36.077679 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:36 crc kubenswrapper[4750]: I0214 14:16:36.130301 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerStarted","Data":"f5afd16665320687c395619d2a823321028b9718fefa43c1f75dc2f4ba835355"} Feb 14 14:16:36 crc kubenswrapper[4750]: I0214 14:16:36.754549 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="accdfd90-00e8-4856-9855-3d4bcf3d4d5b" path="/var/lib/kubelet/pods/accdfd90-00e8-4856-9855-3d4bcf3d4d5b/volumes" Feb 14 14:16:37 crc kubenswrapper[4750]: I0214 14:16:37.147195 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerStarted","Data":"3f76a1b3a8e8d7e7a44d16ce0f31ad6e29df1e45ea071baa5cfbba1c69a7d0e1"} Feb 14 14:16:38 crc kubenswrapper[4750]: I0214 14:16:38.162147 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerStarted","Data":"9835587dc7090c813b4545e2f9c1067cfb19c6c1b48d701e6db89408cb57810a"} Feb 14 14:16:38 crc kubenswrapper[4750]: I0214 14:16:38.162687 4750 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerStarted","Data":"ad66dcda8bcf5d0234c63778f1e07ada0181e718675e20ed00a1e6f2b7de122b"} Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.191679 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerStarted","Data":"60b65e5a4a6c9b76efe6a12dd70b2c47cc3dc4ba98fadff31f4a2d1ab6492dfc"} Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.191986 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-central-agent" containerID="cri-o://3f76a1b3a8e8d7e7a44d16ce0f31ad6e29df1e45ea071baa5cfbba1c69a7d0e1" gracePeriod=30 Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.192262 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.192440 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="proxy-httpd" containerID="cri-o://60b65e5a4a6c9b76efe6a12dd70b2c47cc3dc4ba98fadff31f4a2d1ab6492dfc" gracePeriod=30 Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.192525 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-notification-agent" containerID="cri-o://ad66dcda8bcf5d0234c63778f1e07ada0181e718675e20ed00a1e6f2b7de122b" gracePeriod=30 Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.192567 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="sg-core" 
containerID="cri-o://9835587dc7090c813b4545e2f9c1067cfb19c6c1b48d701e6db89408cb57810a" gracePeriod=30 Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.200057 4750 generic.go:334] "Generic (PLEG): container finished" podID="d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" containerID="5e7d67ed00dd94f46370db5e9826c4e22d836135b7ea786bd63687adea02195b" exitCode=0 Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.200150 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" event={"ID":"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf","Type":"ContainerDied","Data":"5e7d67ed00dd94f46370db5e9826c4e22d836135b7ea786bd63687adea02195b"} Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.201968 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.204778 4750 generic.go:334] "Generic (PLEG): container finished" podID="3d9c8c1e-f288-4429-b701-acc9dd04808e" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" exitCode=0 Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.204819 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5794bb6d69-xd29g" event={"ID":"3d9c8c1e-f288-4429-b701-acc9dd04808e","Type":"ContainerDied","Data":"719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431"} Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.204837 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5794bb6d69-xd29g" event={"ID":"3d9c8c1e-f288-4429-b701-acc9dd04808e","Type":"ContainerDied","Data":"2eb69e2cf75c16fd0bf91a4ae7efce69e20fb9b7e86d230876632020f1348861"} Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.204852 4750 scope.go:117] "RemoveContainer" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.216310 4750 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.753397951 podStartE2EDuration="5.216292293s" podCreationTimestamp="2026-02-14 14:16:35 +0000 UTC" firstStartedPulling="2026-02-14 14:16:36.086207001 +0000 UTC m=+1468.112196482" lastFinishedPulling="2026-02-14 14:16:39.549101343 +0000 UTC m=+1471.575090824" observedRunningTime="2026-02-14 14:16:40.212101204 +0000 UTC m=+1472.238090685" watchObservedRunningTime="2026-02-14 14:16:40.216292293 +0000 UTC m=+1472.242281764" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.274222 4750 scope.go:117] "RemoveContainer" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" Feb 14 14:16:40 crc kubenswrapper[4750]: E0214 14:16:40.276589 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431\": container with ID starting with 719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431 not found: ID does not exist" containerID="719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.276641 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431"} err="failed to get container status \"719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431\": rpc error: code = NotFound desc = could not find container \"719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431\": container with ID starting with 719b6dfb29c72d21eb2940ca69580a39430b338d097639734e19b883bb6b7431 not found: ID does not exist" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.369236 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzsjd\" (UniqueName: 
\"kubernetes.io/projected/3d9c8c1e-f288-4429-b701-acc9dd04808e-kube-api-access-nzsjd\") pod \"3d9c8c1e-f288-4429-b701-acc9dd04808e\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.369311 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data\") pod \"3d9c8c1e-f288-4429-b701-acc9dd04808e\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.369397 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-combined-ca-bundle\") pod \"3d9c8c1e-f288-4429-b701-acc9dd04808e\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.369481 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data-custom\") pod \"3d9c8c1e-f288-4429-b701-acc9dd04808e\" (UID: \"3d9c8c1e-f288-4429-b701-acc9dd04808e\") " Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.374679 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9c8c1e-f288-4429-b701-acc9dd04808e-kube-api-access-nzsjd" (OuterVolumeSpecName: "kube-api-access-nzsjd") pod "3d9c8c1e-f288-4429-b701-acc9dd04808e" (UID: "3d9c8c1e-f288-4429-b701-acc9dd04808e"). InnerVolumeSpecName "kube-api-access-nzsjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.375041 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d9c8c1e-f288-4429-b701-acc9dd04808e" (UID: "3d9c8c1e-f288-4429-b701-acc9dd04808e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.410393 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d9c8c1e-f288-4429-b701-acc9dd04808e" (UID: "3d9c8c1e-f288-4429-b701-acc9dd04808e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.451033 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data" (OuterVolumeSpecName: "config-data") pod "3d9c8c1e-f288-4429-b701-acc9dd04808e" (UID: "3d9c8c1e-f288-4429-b701-acc9dd04808e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.473661 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.473690 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.473702 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzsjd\" (UniqueName: \"kubernetes.io/projected/3d9c8c1e-f288-4429-b701-acc9dd04808e-kube-api-access-nzsjd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:40 crc kubenswrapper[4750]: I0214 14:16:40.473716 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9c8c1e-f288-4429-b701-acc9dd04808e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.218035 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5794bb6d69-xd29g" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.221792 4750 generic.go:334] "Generic (PLEG): container finished" podID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerID="60b65e5a4a6c9b76efe6a12dd70b2c47cc3dc4ba98fadff31f4a2d1ab6492dfc" exitCode=0 Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.221831 4750 generic.go:334] "Generic (PLEG): container finished" podID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerID="9835587dc7090c813b4545e2f9c1067cfb19c6c1b48d701e6db89408cb57810a" exitCode=2 Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.221842 4750 generic.go:334] "Generic (PLEG): container finished" podID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerID="ad66dcda8bcf5d0234c63778f1e07ada0181e718675e20ed00a1e6f2b7de122b" exitCode=0 Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.221899 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerDied","Data":"60b65e5a4a6c9b76efe6a12dd70b2c47cc3dc4ba98fadff31f4a2d1ab6492dfc"} Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.221985 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerDied","Data":"9835587dc7090c813b4545e2f9c1067cfb19c6c1b48d701e6db89408cb57810a"} Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.222000 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerDied","Data":"ad66dcda8bcf5d0234c63778f1e07ada0181e718675e20ed00a1e6f2b7de122b"} Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.247920 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5794bb6d69-xd29g"] Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.260770 4750 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/heat-engine-5794bb6d69-xd29g"] Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.632569 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.805782 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-config-data\") pod \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.806025 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwm6s\" (UniqueName: \"kubernetes.io/projected/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-kube-api-access-wwm6s\") pod \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.806138 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-combined-ca-bundle\") pod \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.806224 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-scripts\") pod \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\" (UID: \"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf\") " Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.819233 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-scripts" (OuterVolumeSpecName: "scripts") pod "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" (UID: "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.825306 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-kube-api-access-wwm6s" (OuterVolumeSpecName: "kube-api-access-wwm6s") pod "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" (UID: "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf"). InnerVolumeSpecName "kube-api-access-wwm6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.839258 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" (UID: "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.860580 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-config-data" (OuterVolumeSpecName: "config-data") pod "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" (UID: "d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.908961 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.909002 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwm6s\" (UniqueName: \"kubernetes.io/projected/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-kube-api-access-wwm6s\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.909012 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:41 crc kubenswrapper[4750]: I0214 14:16:41.909020 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.244469 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" event={"ID":"d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf","Type":"ContainerDied","Data":"7b34d3a80dfeefcddcdec3be6ae78a2a72be812579faef81757d0d1c69b9ef8c"} Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.244510 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b34d3a80dfeefcddcdec3be6ae78a2a72be812579faef81757d0d1c69b9ef8c" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.244569 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kfvr" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.348994 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 14:16:42 crc kubenswrapper[4750]: E0214 14:16:42.351537 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9c8c1e-f288-4429-b701-acc9dd04808e" containerName="heat-engine" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.351565 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9c8c1e-f288-4429-b701-acc9dd04808e" containerName="heat-engine" Feb 14 14:16:42 crc kubenswrapper[4750]: E0214 14:16:42.351595 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" containerName="nova-cell0-conductor-db-sync" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.351605 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" containerName="nova-cell0-conductor-db-sync" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.351887 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" containerName="nova-cell0-conductor-db-sync" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.351919 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9c8c1e-f288-4429-b701-acc9dd04808e" containerName="heat-engine" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.356216 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.358488 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-psvbd" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.362652 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.363509 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.522228 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bccfd8-7ab4-4daa-b272-188438293cf7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.522292 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bccfd8-7ab4-4daa-b272-188438293cf7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.522319 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96h7\" (UniqueName: \"kubernetes.io/projected/76bccfd8-7ab4-4daa-b272-188438293cf7-kube-api-access-v96h7\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.624943 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76bccfd8-7ab4-4daa-b272-188438293cf7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.624997 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96h7\" (UniqueName: \"kubernetes.io/projected/76bccfd8-7ab4-4daa-b272-188438293cf7-kube-api-access-v96h7\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.625453 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bccfd8-7ab4-4daa-b272-188438293cf7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.631071 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bccfd8-7ab4-4daa-b272-188438293cf7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.631638 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bccfd8-7ab4-4daa-b272-188438293cf7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.644210 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96h7\" (UniqueName: \"kubernetes.io/projected/76bccfd8-7ab4-4daa-b272-188438293cf7-kube-api-access-v96h7\") pod \"nova-cell0-conductor-0\" 
(UID: \"76bccfd8-7ab4-4daa-b272-188438293cf7\") " pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.687896 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:42 crc kubenswrapper[4750]: I0214 14:16:42.764031 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9c8c1e-f288-4429-b701-acc9dd04808e" path="/var/lib/kubelet/pods/3d9c8c1e-f288-4429-b701-acc9dd04808e/volumes" Feb 14 14:16:43 crc kubenswrapper[4750]: W0214 14:16:43.331255 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76bccfd8_7ab4_4daa_b272_188438293cf7.slice/crio-f3bb09752e5f23596a102ef771f73d91b3d7272b3caabfdf86d2c9681f362c4e WatchSource:0}: Error finding container f3bb09752e5f23596a102ef771f73d91b3d7272b3caabfdf86d2c9681f362c4e: Status 404 returned error can't find the container with id f3bb09752e5f23596a102ef771f73d91b3d7272b3caabfdf86d2c9681f362c4e Feb 14 14:16:43 crc kubenswrapper[4750]: I0214 14:16:43.332788 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 14:16:44 crc kubenswrapper[4750]: I0214 14:16:44.277858 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76bccfd8-7ab4-4daa-b272-188438293cf7","Type":"ContainerStarted","Data":"9e3c4fc429c4e63755ced5a3f56c7b2f279723976875478ac79bf3014c3cf0b5"} Feb 14 14:16:44 crc kubenswrapper[4750]: I0214 14:16:44.278178 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76bccfd8-7ab4-4daa-b272-188438293cf7","Type":"ContainerStarted","Data":"f3bb09752e5f23596a102ef771f73d91b3d7272b3caabfdf86d2c9681f362c4e"} Feb 14 14:16:44 crc kubenswrapper[4750]: I0214 14:16:44.278790 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:44 crc kubenswrapper[4750]: I0214 14:16:44.313320 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.313300994 podStartE2EDuration="2.313300994s" podCreationTimestamp="2026-02-14 14:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:44.31034214 +0000 UTC m=+1476.336331641" watchObservedRunningTime="2026-02-14 14:16:44.313300994 +0000 UTC m=+1476.339290465" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.295805 4750 generic.go:334] "Generic (PLEG): container finished" podID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerID="3f76a1b3a8e8d7e7a44d16ce0f31ad6e29df1e45ea071baa5cfbba1c69a7d0e1" exitCode=0 Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.296189 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerDied","Data":"3f76a1b3a8e8d7e7a44d16ce0f31ad6e29df1e45ea071baa5cfbba1c69a7d0e1"} Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.610010 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9bp8d" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" probeResult="failure" output=< Feb 14 14:16:45 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:16:45 crc kubenswrapper[4750]: > Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.656246 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813334 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-ceilometer-tls-certs\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813403 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-run-httpd\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813562 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-combined-ca-bundle\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813605 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-log-httpd\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813654 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-sg-core-conf-yaml\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813700 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-scripts\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813729 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xchxr\" (UniqueName: \"kubernetes.io/projected/3e1cbf61-1b23-417d-8b8d-b96591ead9de-kube-api-access-xchxr\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.813748 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-config-data\") pod \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\" (UID: \"3e1cbf61-1b23-417d-8b8d-b96591ead9de\") " Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.814253 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.814452 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.815065 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.815145 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1cbf61-1b23-417d-8b8d-b96591ead9de-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.833324 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-scripts" (OuterVolumeSpecName: "scripts") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.836388 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1cbf61-1b23-417d-8b8d-b96591ead9de-kube-api-access-xchxr" (OuterVolumeSpecName: "kube-api-access-xchxr") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "kube-api-access-xchxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.850317 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.911171 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.918083 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.918126 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xchxr\" (UniqueName: \"kubernetes.io/projected/3e1cbf61-1b23-417d-8b8d-b96591ead9de-kube-api-access-xchxr\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.918137 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.918145 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.924301 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:45 crc kubenswrapper[4750]: I0214 14:16:45.947559 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-config-data" (OuterVolumeSpecName: "config-data") pod "3e1cbf61-1b23-417d-8b8d-b96591ead9de" (UID: "3e1cbf61-1b23-417d-8b8d-b96591ead9de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.020393 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.020431 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1cbf61-1b23-417d-8b8d-b96591ead9de-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.309935 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e1cbf61-1b23-417d-8b8d-b96591ead9de","Type":"ContainerDied","Data":"f5afd16665320687c395619d2a823321028b9718fefa43c1f75dc2f4ba835355"} Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.309988 4750 scope.go:117] "RemoveContainer" containerID="60b65e5a4a6c9b76efe6a12dd70b2c47cc3dc4ba98fadff31f4a2d1ab6492dfc" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.310137 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.338391 4750 scope.go:117] "RemoveContainer" containerID="9835587dc7090c813b4545e2f9c1067cfb19c6c1b48d701e6db89408cb57810a" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.347249 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.362053 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.366959 4750 scope.go:117] "RemoveContainer" containerID="ad66dcda8bcf5d0234c63778f1e07ada0181e718675e20ed00a1e6f2b7de122b" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.375467 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:46 crc kubenswrapper[4750]: E0214 14:16:46.375916 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="proxy-httpd" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.375933 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="proxy-httpd" Feb 14 14:16:46 crc kubenswrapper[4750]: E0214 14:16:46.375951 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-central-agent" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.375957 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-central-agent" Feb 14 14:16:46 crc kubenswrapper[4750]: E0214 14:16:46.375974 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="sg-core" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.375981 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="sg-core" Feb 14 14:16:46 crc kubenswrapper[4750]: E0214 14:16:46.376013 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-notification-agent" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.376020 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-notification-agent" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.376229 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="proxy-httpd" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.376255 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-central-agent" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.376268 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="sg-core" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.376282 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" containerName="ceilometer-notification-agent" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.378293 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.385003 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.385206 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.385408 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.389051 4750 scope.go:117] "RemoveContainer" containerID="3f76a1b3a8e8d7e7a44d16ce0f31ad6e29df1e45ea071baa5cfbba1c69a7d0e1" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.410189 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.535657 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.536762 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-log-httpd\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.537400 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-run-httpd\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" 
Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.537865 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-config-data\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.537970 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdtj\" (UniqueName: \"kubernetes.io/projected/2096d84a-1034-4cbc-8df6-0d4df0161fc3-kube-api-access-tvdtj\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.538073 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.538285 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.538417 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-scripts\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640497 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-config-data\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640558 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdtj\" (UniqueName: \"kubernetes.io/projected/2096d84a-1034-4cbc-8df6-0d4df0161fc3-kube-api-access-tvdtj\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640601 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640646 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640662 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-scripts\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640703 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640761 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-log-httpd\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.640786 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-run-httpd\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.641332 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-run-httpd\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.642249 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-log-httpd\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.646575 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-scripts\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.647160 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.647930 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-config-data\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.648197 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.650181 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.659714 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdtj\" (UniqueName: \"kubernetes.io/projected/2096d84a-1034-4cbc-8df6-0d4df0161fc3-kube-api-access-tvdtj\") pod \"ceilometer-0\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.708188 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:46 crc kubenswrapper[4750]: I0214 14:16:46.759700 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1cbf61-1b23-417d-8b8d-b96591ead9de" path="/var/lib/kubelet/pods/3e1cbf61-1b23-417d-8b8d-b96591ead9de/volumes" Feb 14 14:16:47 crc kubenswrapper[4750]: W0214 14:16:47.221398 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2096d84a_1034_4cbc_8df6_0d4df0161fc3.slice/crio-2579add2d179d9e1d5b598bda89fd3193daf8250bc8c6cff3b4731ba708e81d8 WatchSource:0}: Error finding container 2579add2d179d9e1d5b598bda89fd3193daf8250bc8c6cff3b4731ba708e81d8: Status 404 returned error can't find the container with id 2579add2d179d9e1d5b598bda89fd3193daf8250bc8c6cff3b4731ba708e81d8 Feb 14 14:16:47 crc kubenswrapper[4750]: I0214 14:16:47.223059 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:47 crc kubenswrapper[4750]: I0214 14:16:47.321516 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerStarted","Data":"2579add2d179d9e1d5b598bda89fd3193daf8250bc8c6cff3b4731ba708e81d8"} Feb 14 14:16:48 crc kubenswrapper[4750]: I0214 14:16:48.336541 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerStarted","Data":"45a6f95e6656acaddf988afdf64ee79bb28a1cd751acf42d4f0299879c6e05b5"} Feb 14 14:16:48 crc kubenswrapper[4750]: I0214 14:16:48.441197 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:16:49 crc kubenswrapper[4750]: I0214 14:16:49.353534 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerStarted","Data":"62b13efd7901f4ac9dcc84a1738c17073a8b6c67679e6cf87d993877d1089fd7"} Feb 14 14:16:49 crc kubenswrapper[4750]: I0214 14:16:49.354221 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerStarted","Data":"9986d757abda1880af027b07a03a7891fe24f5f6a0846c7107babb53daed340b"} Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.141187 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-ggdkj"] Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.143465 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.156465 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-5c3d-account-create-update-gc5rf"] Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.158742 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.171242 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-ggdkj"] Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.172912 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.180685 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-5c3d-account-create-update-gc5rf"] Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.236692 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc6646-1544-4d25-bcc8-3b269f31b74b-operator-scripts\") pod \"aodh-db-create-ggdkj\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.236838 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjgng\" (UniqueName: \"kubernetes.io/projected/f6dc6646-1544-4d25-bcc8-3b269f31b74b-kube-api-access-kjgng\") pod \"aodh-db-create-ggdkj\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.338928 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2955f853-8f28-4231-8d84-ed81bb9c787e-operator-scripts\") pod \"aodh-5c3d-account-create-update-gc5rf\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.338989 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f6dc6646-1544-4d25-bcc8-3b269f31b74b-operator-scripts\") pod \"aodh-db-create-ggdkj\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.339057 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnw79\" (UniqueName: \"kubernetes.io/projected/2955f853-8f28-4231-8d84-ed81bb9c787e-kube-api-access-qnw79\") pod \"aodh-5c3d-account-create-update-gc5rf\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.339255 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjgng\" (UniqueName: \"kubernetes.io/projected/f6dc6646-1544-4d25-bcc8-3b269f31b74b-kube-api-access-kjgng\") pod \"aodh-db-create-ggdkj\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.339934 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc6646-1544-4d25-bcc8-3b269f31b74b-operator-scripts\") pod \"aodh-db-create-ggdkj\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.371613 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjgng\" (UniqueName: \"kubernetes.io/projected/f6dc6646-1544-4d25-bcc8-3b269f31b74b-kube-api-access-kjgng\") pod \"aodh-db-create-ggdkj\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.441227 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnw79\" (UniqueName: 
\"kubernetes.io/projected/2955f853-8f28-4231-8d84-ed81bb9c787e-kube-api-access-qnw79\") pod \"aodh-5c3d-account-create-update-gc5rf\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.441431 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2955f853-8f28-4231-8d84-ed81bb9c787e-operator-scripts\") pod \"aodh-5c3d-account-create-update-gc5rf\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.442230 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2955f853-8f28-4231-8d84-ed81bb9c787e-operator-scripts\") pod \"aodh-5c3d-account-create-update-gc5rf\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.457729 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnw79\" (UniqueName: \"kubernetes.io/projected/2955f853-8f28-4231-8d84-ed81bb9c787e-kube-api-access-qnw79\") pod \"aodh-5c3d-account-create-update-gc5rf\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.483492 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:50 crc kubenswrapper[4750]: I0214 14:16:50.494891 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.103598 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-ggdkj"] Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.258970 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-5c3d-account-create-update-gc5rf"] Feb 14 14:16:51 crc kubenswrapper[4750]: W0214 14:16:51.264855 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2955f853_8f28_4231_8d84_ed81bb9c787e.slice/crio-89fb4271f402b1d4c31522a5d13f0f318ea9d9d19047b628a0076ee4eae53da8 WatchSource:0}: Error finding container 89fb4271f402b1d4c31522a5d13f0f318ea9d9d19047b628a0076ee4eae53da8: Status 404 returned error can't find the container with id 89fb4271f402b1d4c31522a5d13f0f318ea9d9d19047b628a0076ee4eae53da8 Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.379926 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5c3d-account-create-update-gc5rf" event={"ID":"2955f853-8f28-4231-8d84-ed81bb9c787e","Type":"ContainerStarted","Data":"89fb4271f402b1d4c31522a5d13f0f318ea9d9d19047b628a0076ee4eae53da8"} Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.386376 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerStarted","Data":"c1618f64fa12d7622caa3d7a0e228fc5e59f5bfcf25d88609f6e6cac8e9f12ce"} Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.386726 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-central-agent" containerID="cri-o://45a6f95e6656acaddf988afdf64ee79bb28a1cd751acf42d4f0299879c6e05b5" gracePeriod=30 Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.387096 4750 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.387507 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="proxy-httpd" containerID="cri-o://c1618f64fa12d7622caa3d7a0e228fc5e59f5bfcf25d88609f6e6cac8e9f12ce" gracePeriod=30 Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.387631 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="sg-core" containerID="cri-o://62b13efd7901f4ac9dcc84a1738c17073a8b6c67679e6cf87d993877d1089fd7" gracePeriod=30 Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.387757 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-notification-agent" containerID="cri-o://9986d757abda1880af027b07a03a7891fe24f5f6a0846c7107babb53daed340b" gracePeriod=30 Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.396406 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ggdkj" event={"ID":"f6dc6646-1544-4d25-bcc8-3b269f31b74b","Type":"ContainerStarted","Data":"516678059df0b161c311edded47b210243fc0cd95e08650ebdaaaa9803446bec"} Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.396592 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ggdkj" event={"ID":"f6dc6646-1544-4d25-bcc8-3b269f31b74b","Type":"ContainerStarted","Data":"9031b5a7eef9d24eac3e577739fac1a8420f1fbf3fddba5875427ac9bae7f9c9"} Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.435884 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.106754285 podStartE2EDuration="5.435861409s" 
podCreationTimestamp="2026-02-14 14:16:46 +0000 UTC" firstStartedPulling="2026-02-14 14:16:47.223275639 +0000 UTC m=+1479.249265120" lastFinishedPulling="2026-02-14 14:16:50.552382763 +0000 UTC m=+1482.578372244" observedRunningTime="2026-02-14 14:16:51.424926058 +0000 UTC m=+1483.450915539" watchObservedRunningTime="2026-02-14 14:16:51.435861409 +0000 UTC m=+1483.461850890" Feb 14 14:16:51 crc kubenswrapper[4750]: I0214 14:16:51.448064 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-ggdkj" podStartSLOduration=1.448046126 podStartE2EDuration="1.448046126s" podCreationTimestamp="2026-02-14 14:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:51.447334256 +0000 UTC m=+1483.473323737" watchObservedRunningTime="2026-02-14 14:16:51.448046126 +0000 UTC m=+1483.474035607" Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.408652 4750 generic.go:334] "Generic (PLEG): container finished" podID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerID="c1618f64fa12d7622caa3d7a0e228fc5e59f5bfcf25d88609f6e6cac8e9f12ce" exitCode=0 Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.408952 4750 generic.go:334] "Generic (PLEG): container finished" podID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerID="62b13efd7901f4ac9dcc84a1738c17073a8b6c67679e6cf87d993877d1089fd7" exitCode=2 Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.408960 4750 generic.go:334] "Generic (PLEG): container finished" podID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerID="9986d757abda1880af027b07a03a7891fe24f5f6a0846c7107babb53daed340b" exitCode=0 Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.408776 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerDied","Data":"c1618f64fa12d7622caa3d7a0e228fc5e59f5bfcf25d88609f6e6cac8e9f12ce"} Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.409022 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerDied","Data":"62b13efd7901f4ac9dcc84a1738c17073a8b6c67679e6cf87d993877d1089fd7"} Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.409051 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerDied","Data":"9986d757abda1880af027b07a03a7891fe24f5f6a0846c7107babb53daed340b"} Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.410475 4750 generic.go:334] "Generic (PLEG): container finished" podID="f6dc6646-1544-4d25-bcc8-3b269f31b74b" containerID="516678059df0b161c311edded47b210243fc0cd95e08650ebdaaaa9803446bec" exitCode=0 Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.410566 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ggdkj" event={"ID":"f6dc6646-1544-4d25-bcc8-3b269f31b74b","Type":"ContainerDied","Data":"516678059df0b161c311edded47b210243fc0cd95e08650ebdaaaa9803446bec"} Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.412047 4750 generic.go:334] "Generic (PLEG): container finished" podID="2955f853-8f28-4231-8d84-ed81bb9c787e" containerID="b04b520d013b92f2ac2310030b1be32ac7e64d535f2c74f061749b2bdf0384ca" exitCode=0 Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.412080 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5c3d-account-create-update-gc5rf" event={"ID":"2955f853-8f28-4231-8d84-ed81bb9c787e","Type":"ContainerDied","Data":"b04b520d013b92f2ac2310030b1be32ac7e64d535f2c74f061749b2bdf0384ca"} Feb 14 14:16:52 crc kubenswrapper[4750]: I0214 14:16:52.735650 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.212238 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-g6dqm"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.214062 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.216520 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.217174 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.224322 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6dqm"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.330444 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-config-data\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.330502 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r629t\" (UniqueName: \"kubernetes.io/projected/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-kube-api-access-r629t\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.330551 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.330579 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-scripts\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.368850 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.421002 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.427359 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.436461 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-config-data\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.436525 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r629t\" (UniqueName: \"kubernetes.io/projected/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-kube-api-access-r629t\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.436578 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.436601 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-scripts\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.511320 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-config-data\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.579850 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.586923 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-scripts\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.596753 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.599584 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r629t\" (UniqueName: \"kubernetes.io/projected/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-kube-api-access-r629t\") pod \"nova-cell0-cell-mapping-g6dqm\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") " pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.614317 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.614416 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcc5\" (UniqueName: \"kubernetes.io/projected/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-kube-api-access-dlcc5\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.614553 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-logs\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.614679 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-config-data\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.636542 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: 
I0214 14:16:53.638825 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.641541 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.714688 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723285 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx26\" (UniqueName: \"kubernetes.io/projected/acfe2e91-cbbe-4fd4-a194-697f5240263d-kube-api-access-wbx26\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723373 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-logs\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723680 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-config-data\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723723 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-config-data\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723749 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723804 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723859 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acfe2e91-cbbe-4fd4-a194-697f5240263d-logs\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.723877 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlcc5\" (UniqueName: \"kubernetes.io/projected/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-kube-api-access-dlcc5\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.724664 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-logs\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.729242 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.729466 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-config-data\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.749084 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlcc5\" (UniqueName: \"kubernetes.io/projected/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-kube-api-access-dlcc5\") pod \"nova-api-0\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " pod="openstack/nova-api-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.760422 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.762050 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.764430 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.766550 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.807176 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-74l7d"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.811034 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.833167 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wk5\" (UniqueName: \"kubernetes.io/projected/ec6a6d64-77b8-486c-aa23-3927ca7a820a-kube-api-access-56wk5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.833214 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-config-data\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.833237 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.833258 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.833322 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acfe2e91-cbbe-4fd4-a194-697f5240263d-logs\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 
14:16:53.833367 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbx26\" (UniqueName: \"kubernetes.io/projected/acfe2e91-cbbe-4fd4-a194-697f5240263d-kube-api-access-wbx26\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.833417 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.839093 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-74l7d"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.840695 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acfe2e91-cbbe-4fd4-a194-697f5240263d-logs\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.849829 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.858842 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6dqm" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.877225 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-config-data\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.878746 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbx26\" (UniqueName: \"kubernetes.io/projected/acfe2e91-cbbe-4fd4-a194-697f5240263d-kube-api-access-wbx26\") pod \"nova-metadata-0\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " pod="openstack/nova-metadata-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.882357 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.906444 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.910452 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.934864 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.935002 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.935046 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-svc\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.935066 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-config\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.935091 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.935132 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.937278 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t566s\" (UniqueName: \"kubernetes.io/projected/345928e8-5155-49e5-9159-7f9142ee2dd0-kube-api-access-t566s\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.937308 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.937366 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56wk5\" (UniqueName: \"kubernetes.io/projected/ec6a6d64-77b8-486c-aa23-3927ca7a820a-kube-api-access-56wk5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.946606 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.948359 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.966890 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56wk5\" (UniqueName: \"kubernetes.io/projected/ec6a6d64-77b8-486c-aa23-3927ca7a820a-kube-api-access-56wk5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:53 crc kubenswrapper[4750]: I0214 14:16:53.990924 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.001569 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.027702 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039331 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8v7\" (UniqueName: \"kubernetes.io/projected/c26487e4-dff4-42b9-8e04-ef40c9b079f4-kube-api-access-7j8v7\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039408 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039458 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039499 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-svc\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039523 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-config\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 
14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039548 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039617 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t566s\" (UniqueName: \"kubernetes.io/projected/345928e8-5155-49e5-9159-7f9142ee2dd0-kube-api-access-t566s\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039635 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.039668 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-config-data\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.040531 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.041074 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-svc\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.041552 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-config\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.042033 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.042800 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.062839 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t566s\" (UniqueName: \"kubernetes.io/projected/345928e8-5155-49e5-9159-7f9142ee2dd0-kube-api-access-t566s\") pod \"dnsmasq-dns-9b86998b5-74l7d\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.142100 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-config-data\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.142490 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8v7\" (UniqueName: \"kubernetes.io/projected/c26487e4-dff4-42b9-8e04-ef40c9b079f4-kube-api-access-7j8v7\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.142549 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.154760 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-config-data\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.155277 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.161042 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8v7\" (UniqueName: \"kubernetes.io/projected/c26487e4-dff4-42b9-8e04-ef40c9b079f4-kube-api-access-7j8v7\") pod \"nova-scheduler-0\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " 
pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.161106 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.192583 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.283785 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.333508 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.461887 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjgng\" (UniqueName: \"kubernetes.io/projected/f6dc6646-1544-4d25-bcc8-3b269f31b74b-kube-api-access-kjgng\") pod \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.462426 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc6646-1544-4d25-bcc8-3b269f31b74b-operator-scripts\") pod \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\" (UID: \"f6dc6646-1544-4d25-bcc8-3b269f31b74b\") " Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.463563 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6dc6646-1544-4d25-bcc8-3b269f31b74b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6dc6646-1544-4d25-bcc8-3b269f31b74b" (UID: "f6dc6646-1544-4d25-bcc8-3b269f31b74b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.469064 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6dc6646-1544-4d25-bcc8-3b269f31b74b-kube-api-access-kjgng" (OuterVolumeSpecName: "kube-api-access-kjgng") pod "f6dc6646-1544-4d25-bcc8-3b269f31b74b" (UID: "f6dc6646-1544-4d25-bcc8-3b269f31b74b"). InnerVolumeSpecName "kube-api-access-kjgng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.564683 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc6646-1544-4d25-bcc8-3b269f31b74b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.564704 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjgng\" (UniqueName: \"kubernetes.io/projected/f6dc6646-1544-4d25-bcc8-3b269f31b74b-kube-api-access-kjgng\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.662321 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6dqm"] Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.674998 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ggdkj" event={"ID":"f6dc6646-1544-4d25-bcc8-3b269f31b74b","Type":"ContainerDied","Data":"9031b5a7eef9d24eac3e577739fac1a8420f1fbf3fddba5875427ac9bae7f9c9"} Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.675038 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9031b5a7eef9d24eac3e577739fac1a8420f1fbf3fddba5875427ac9bae7f9c9" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.675107 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-ggdkj" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.680636 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5c3d-account-create-update-gc5rf" event={"ID":"2955f853-8f28-4231-8d84-ed81bb9c787e","Type":"ContainerDied","Data":"89fb4271f402b1d4c31522a5d13f0f318ea9d9d19047b628a0076ee4eae53da8"} Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.680674 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89fb4271f402b1d4c31522a5d13f0f318ea9d9d19047b628a0076ee4eae53da8" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.729807 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.801696 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.855299 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.870273 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2955f853-8f28-4231-8d84-ed81bb9c787e-operator-scripts\") pod \"2955f853-8f28-4231-8d84-ed81bb9c787e\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.870568 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnw79\" (UniqueName: \"kubernetes.io/projected/2955f853-8f28-4231-8d84-ed81bb9c787e-kube-api-access-qnw79\") pod \"2955f853-8f28-4231-8d84-ed81bb9c787e\" (UID: \"2955f853-8f28-4231-8d84-ed81bb9c787e\") " Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.871219 4750 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2955f853-8f28-4231-8d84-ed81bb9c787e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2955f853-8f28-4231-8d84-ed81bb9c787e" (UID: "2955f853-8f28-4231-8d84-ed81bb9c787e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.871376 4750 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2955f853-8f28-4231-8d84-ed81bb9c787e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.875023 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2955f853-8f28-4231-8d84-ed81bb9c787e-kube-api-access-qnw79" (OuterVolumeSpecName: "kube-api-access-qnw79") pod "2955f853-8f28-4231-8d84-ed81bb9c787e" (UID: "2955f853-8f28-4231-8d84-ed81bb9c787e"). InnerVolumeSpecName "kube-api-access-qnw79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.973862 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnw79\" (UniqueName: \"kubernetes.io/projected/2955f853-8f28-4231-8d84-ed81bb9c787e-kube-api-access-qnw79\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.980320 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:16:54 crc kubenswrapper[4750]: I0214 14:16:54.995375 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.012822 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9bp8d"] Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.196733 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-72fmt"] Feb 14 14:16:55 crc kubenswrapper[4750]: E0214 14:16:55.197399 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6dc6646-1544-4d25-bcc8-3b269f31b74b" containerName="mariadb-database-create" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.197414 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6dc6646-1544-4d25-bcc8-3b269f31b74b" containerName="mariadb-database-create" Feb 14 14:16:55 crc kubenswrapper[4750]: E0214 14:16:55.197433 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2955f853-8f28-4231-8d84-ed81bb9c787e" containerName="mariadb-account-create-update" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.197442 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955f853-8f28-4231-8d84-ed81bb9c787e" containerName="mariadb-account-create-update" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.197757 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6dc6646-1544-4d25-bcc8-3b269f31b74b" 
containerName="mariadb-database-create" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.197770 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2955f853-8f28-4231-8d84-ed81bb9c787e" containerName="mariadb-account-create-update" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.198594 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.208816 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.209260 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.211903 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.240105 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.293457 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdv9\" (UniqueName: \"kubernetes.io/projected/866ef186-d207-4fb3-8940-8c27b138480b-kube-api-access-xqdv9\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.293537 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-config-data\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 
14:16:55.293735 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.293832 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-scripts\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.312211 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-72fmt"] Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.410551 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdv9\" (UniqueName: \"kubernetes.io/projected/866ef186-d207-4fb3-8940-8c27b138480b-kube-api-access-xqdv9\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.410612 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-config-data\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.410722 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.410778 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-scripts\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.416945 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-config-data\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.417126 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-scripts\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.420735 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.438818 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdv9\" (UniqueName: 
\"kubernetes.io/projected/866ef186-d207-4fb3-8940-8c27b138480b-kube-api-access-xqdv9\") pod \"nova-cell1-conductor-db-sync-72fmt\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") " pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.476018 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-74l7d"] Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.543023 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-72fmt" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.731371 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e","Type":"ContainerStarted","Data":"92470ff9b5f7ee228e4f270343e36fd85e2d66b89f570b85ef373a0c8f41c4a9"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.738852 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"acfe2e91-cbbe-4fd4-a194-697f5240263d","Type":"ContainerStarted","Data":"a7a8cccc5e7bdbf90d2a0ac0819583506f0407c0675d86c050d68993cd02e2b0"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.742658 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" event={"ID":"345928e8-5155-49e5-9159-7f9142ee2dd0","Type":"ContainerStarted","Data":"764b0d950da4c4c22a73328bfeac98825fcd874aa1b9950757f3394272f4848d"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.750411 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c26487e4-dff4-42b9-8e04-ef40c9b079f4","Type":"ContainerStarted","Data":"c917b0dd8fd53cf4180a2c664069d64402070699276d4ed3b62a3557cbea471d"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.759474 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6dqm" 
event={"ID":"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f","Type":"ContainerStarted","Data":"52ca3a14b634963ed0acfe0eb967f9160c104a19a3db8df86d62fcd1f81eafeb"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.759534 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6dqm" event={"ID":"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f","Type":"ContainerStarted","Data":"edc95c543bf02800b8cabc09dc7bc825d9e653fe21f6e77bfe61553fc16764cb"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.773437 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec6a6d64-77b8-486c-aa23-3927ca7a820a","Type":"ContainerStarted","Data":"be3507d396bc7106faa716e7ac56ae01762c27d77722284eadd4c1b957eee57c"} Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.773587 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-5c3d-account-create-update-gc5rf" Feb 14 14:16:55 crc kubenswrapper[4750]: I0214 14:16:55.803728 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-g6dqm" podStartSLOduration=2.8037030080000003 podStartE2EDuration="2.803703008s" podCreationTimestamp="2026-02-14 14:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:55.795382162 +0000 UTC m=+1487.821371643" watchObservedRunningTime="2026-02-14 14:16:55.803703008 +0000 UTC m=+1487.829692489" Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.178019 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-72fmt"] Feb 14 14:16:56 crc kubenswrapper[4750]: W0214 14:16:56.255033 4750 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod866ef186_d207_4fb3_8940_8c27b138480b.slice/crio-a2feb2816476b9401ce87e7abce4415b96d738ac6166230b8630b17e669564a3 WatchSource:0}: Error finding container a2feb2816476b9401ce87e7abce4415b96d738ac6166230b8630b17e669564a3: Status 404 returned error can't find the container with id a2feb2816476b9401ce87e7abce4415b96d738ac6166230b8630b17e669564a3 Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.801098 4750 generic.go:334] "Generic (PLEG): container finished" podID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerID="22f7f9617c0101e3b17f5426e1b09bc3d7b7fee6499b0873bd94730643a13eda" exitCode=0 Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.801198 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" event={"ID":"345928e8-5155-49e5-9159-7f9142ee2dd0","Type":"ContainerDied","Data":"22f7f9617c0101e3b17f5426e1b09bc3d7b7fee6499b0873bd94730643a13eda"} Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.805382 4750 generic.go:334] "Generic (PLEG): container finished" podID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerID="45a6f95e6656acaddf988afdf64ee79bb28a1cd751acf42d4f0299879c6e05b5" exitCode=0 Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.805463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerDied","Data":"45a6f95e6656acaddf988afdf64ee79bb28a1cd751acf42d4f0299879c6e05b5"} Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.811433 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9bp8d" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" containerID="cri-o://dc4eb70df0864db6c6c5de127a2b2d2ecf0964679372dc96627bb6e89a430442" gracePeriod=2 Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.811810 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-72fmt" event={"ID":"866ef186-d207-4fb3-8940-8c27b138480b","Type":"ContainerStarted","Data":"555f94b71066c3a86bce6b9fe88fdfd9783dee13be10d4a8bd5808b751256080"} Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.811839 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-72fmt" event={"ID":"866ef186-d207-4fb3-8940-8c27b138480b","Type":"ContainerStarted","Data":"a2feb2816476b9401ce87e7abce4415b96d738ac6166230b8630b17e669564a3"} Feb 14 14:16:56 crc kubenswrapper[4750]: I0214 14:16:56.852561 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-72fmt" podStartSLOduration=1.85254287 podStartE2EDuration="1.85254287s" podCreationTimestamp="2026-02-14 14:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:56.849261197 +0000 UTC m=+1488.875250678" watchObservedRunningTime="2026-02-14 14:16:56.85254287 +0000 UTC m=+1488.878532351" Feb 14 14:16:57 crc kubenswrapper[4750]: I0214 14:16:57.410902 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:16:57 crc kubenswrapper[4750]: I0214 14:16:57.429810 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:16:57 crc kubenswrapper[4750]: I0214 14:16:57.828131 4750 generic.go:334] "Generic (PLEG): container finished" podID="864aae15-a641-418d-9df3-8d91a914ccfa" containerID="dc4eb70df0864db6c6c5de127a2b2d2ecf0964679372dc96627bb6e89a430442" exitCode=0 Feb 14 14:16:57 crc kubenswrapper[4750]: I0214 14:16:57.829421 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" 
event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerDied","Data":"dc4eb70df0864db6c6c5de127a2b2d2ecf0964679372dc96627bb6e89a430442"} Feb 14 14:16:58 crc kubenswrapper[4750]: I0214 14:16:58.860001 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2096d84a-1034-4cbc-8df6-0d4df0161fc3","Type":"ContainerDied","Data":"2579add2d179d9e1d5b598bda89fd3193daf8250bc8c6cff3b4731ba708e81d8"} Feb 14 14:16:58 crc kubenswrapper[4750]: I0214 14:16:58.860980 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2579add2d179d9e1d5b598bda89fd3193daf8250bc8c6cff3b4731ba708e81d8" Feb 14 14:16:58 crc kubenswrapper[4750]: I0214 14:16:58.867550 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bp8d" event={"ID":"864aae15-a641-418d-9df3-8d91a914ccfa","Type":"ContainerDied","Data":"96f2e0266cf554aea9d0a7728940d6e35c973b44fc2aaca237b44b5e99883ea9"} Feb 14 14:16:58 crc kubenswrapper[4750]: I0214 14:16:58.867745 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f2e0266cf554aea9d0a7728940d6e35c973b44fc2aaca237b44b5e99883ea9" Feb 14 14:16:58 crc kubenswrapper[4750]: I0214 14:16:58.990929 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.004291 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.145893 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-combined-ca-bundle\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.146238 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcxjn\" (UniqueName: \"kubernetes.io/projected/864aae15-a641-418d-9df3-8d91a914ccfa-kube-api-access-qcxjn\") pod \"864aae15-a641-418d-9df3-8d91a914ccfa\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.146342 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-ceilometer-tls-certs\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.146504 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-config-data\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147215 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvdtj\" (UniqueName: \"kubernetes.io/projected/2096d84a-1034-4cbc-8df6-0d4df0161fc3-kube-api-access-tvdtj\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147388 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-run-httpd\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147476 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-utilities\") pod \"864aae15-a641-418d-9df3-8d91a914ccfa\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147597 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-log-httpd\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147709 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-sg-core-conf-yaml\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147897 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-catalog-content\") pod \"864aae15-a641-418d-9df3-8d91a914ccfa\" (UID: \"864aae15-a641-418d-9df3-8d91a914ccfa\") " Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.147967 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-scripts\") pod \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\" (UID: \"2096d84a-1034-4cbc-8df6-0d4df0161fc3\") " Feb 14 
14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.149764 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-utilities" (OuterVolumeSpecName: "utilities") pod "864aae15-a641-418d-9df3-8d91a914ccfa" (UID: "864aae15-a641-418d-9df3-8d91a914ccfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.150414 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.150882 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.152889 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-scripts" (OuterVolumeSpecName: "scripts") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.200270 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864aae15-a641-418d-9df3-8d91a914ccfa-kube-api-access-qcxjn" (OuterVolumeSpecName: "kube-api-access-qcxjn") pod "864aae15-a641-418d-9df3-8d91a914ccfa" (UID: "864aae15-a641-418d-9df3-8d91a914ccfa"). InnerVolumeSpecName "kube-api-access-qcxjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.217332 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2096d84a-1034-4cbc-8df6-0d4df0161fc3-kube-api-access-tvdtj" (OuterVolumeSpecName: "kube-api-access-tvdtj") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "kube-api-access-tvdtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.238559 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252257 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252287 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcxjn\" (UniqueName: \"kubernetes.io/projected/864aae15-a641-418d-9df3-8d91a914ccfa-kube-api-access-qcxjn\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252298 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvdtj\" (UniqueName: \"kubernetes.io/projected/2096d84a-1034-4cbc-8df6-0d4df0161fc3-kube-api-access-tvdtj\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252306 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252316 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252325 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2096d84a-1034-4cbc-8df6-0d4df0161fc3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.252333 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.264007 4750 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "864aae15-a641-418d-9df3-8d91a914ccfa" (UID: "864aae15-a641-418d-9df3-8d91a914ccfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.289462 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.309858 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.354475 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864aae15-a641-418d-9df3-8d91a914ccfa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.354520 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.354535 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.400839 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-config-data" (OuterVolumeSpecName: "config-data") pod "2096d84a-1034-4cbc-8df6-0d4df0161fc3" (UID: "2096d84a-1034-4cbc-8df6-0d4df0161fc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.457517 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096d84a-1034-4cbc-8df6-0d4df0161fc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.880911 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e","Type":"ContainerStarted","Data":"5bf2917018da5101577aa64c044e513ce3174121f46bbe584a0d864762f563ac"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.880954 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e","Type":"ContainerStarted","Data":"d86076561cbf328745c6f32f0abb482cfbedd251dd8872ccf30c69d77c2f2f97"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.882959 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"acfe2e91-cbbe-4fd4-a194-697f5240263d","Type":"ContainerStarted","Data":"8eb9c995aaba8f88bea8e4f07760a7411747e1b92a0f7e403ed56992a693f69d"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.883004 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"acfe2e91-cbbe-4fd4-a194-697f5240263d","Type":"ContainerStarted","Data":"c7e458f9a6253ac3969302512df4866f7f3b505bd5f59f6e1b3daebb7174f2b7"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.883151 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-log" containerID="cri-o://c7e458f9a6253ac3969302512df4866f7f3b505bd5f59f6e1b3daebb7174f2b7" gracePeriod=30 Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.883463 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-metadata" containerID="cri-o://8eb9c995aaba8f88bea8e4f07760a7411747e1b92a0f7e403ed56992a693f69d" gracePeriod=30 Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.893890 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" event={"ID":"345928e8-5155-49e5-9159-7f9142ee2dd0","Type":"ContainerStarted","Data":"37518088551d5e11c890ab9e91837281bbb8cbeac7b34184961bdb2fb114c93f"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.894096 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.896244 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c26487e4-dff4-42b9-8e04-ef40c9b079f4","Type":"ContainerStarted","Data":"ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.899387 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec6a6d64-77b8-486c-aa23-3927ca7a820a","Type":"ContainerStarted","Data":"6dca7009d852793796c0b1875030461e6fd6db3e0e9852f0c8b2701670d6cd2a"} Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.899412 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9bp8d" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.899673 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.899752 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ec6a6d64-77b8-486c-aa23-3927ca7a820a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6dca7009d852793796c0b1875030461e6fd6db3e0e9852f0c8b2701670d6cd2a" gracePeriod=30 Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.917664 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.114287468 podStartE2EDuration="6.917645581s" podCreationTimestamp="2026-02-14 14:16:53 +0000 UTC" firstStartedPulling="2026-02-14 14:16:54.960935611 +0000 UTC m=+1486.986925092" lastFinishedPulling="2026-02-14 14:16:58.764293724 +0000 UTC m=+1490.790283205" observedRunningTime="2026-02-14 14:16:59.906861064 +0000 UTC m=+1491.932850545" watchObservedRunningTime="2026-02-14 14:16:59.917645581 +0000 UTC m=+1491.943635062" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.941592 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.370952464 podStartE2EDuration="6.941567062s" podCreationTimestamp="2026-02-14 14:16:53 +0000 UTC" firstStartedPulling="2026-02-14 14:16:55.215555358 +0000 UTC m=+1487.241544839" lastFinishedPulling="2026-02-14 14:16:58.786169956 +0000 UTC m=+1490.812159437" observedRunningTime="2026-02-14 14:16:59.936438196 +0000 UTC m=+1491.962427677" watchObservedRunningTime="2026-02-14 14:16:59.941567062 +0000 UTC m=+1491.967556563" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.971677 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.175536302 podStartE2EDuration="6.971661129s" podCreationTimestamp="2026-02-14 14:16:53 +0000 UTC" firstStartedPulling="2026-02-14 14:16:54.968073134 
+0000 UTC m=+1486.994062615" lastFinishedPulling="2026-02-14 14:16:58.764197961 +0000 UTC m=+1490.790187442" observedRunningTime="2026-02-14 14:16:59.951993999 +0000 UTC m=+1491.977983480" watchObservedRunningTime="2026-02-14 14:16:59.971661129 +0000 UTC m=+1491.997650610" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.980158 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" podStartSLOduration=6.9801436 podStartE2EDuration="6.9801436s" podCreationTimestamp="2026-02-14 14:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:16:59.969049334 +0000 UTC m=+1491.995038825" watchObservedRunningTime="2026-02-14 14:16:59.9801436 +0000 UTC m=+1492.006133081" Feb 14 14:16:59 crc kubenswrapper[4750]: I0214 14:16:59.999746 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.434860613 podStartE2EDuration="6.999717247s" podCreationTimestamp="2026-02-14 14:16:53 +0000 UTC" firstStartedPulling="2026-02-14 14:16:55.210929577 +0000 UTC m=+1487.236919058" lastFinishedPulling="2026-02-14 14:16:58.775786201 +0000 UTC m=+1490.801775692" observedRunningTime="2026-02-14 14:16:59.987356055 +0000 UTC m=+1492.013345536" watchObservedRunningTime="2026-02-14 14:16:59.999717247 +0000 UTC m=+1492.025706728" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.033333 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9bp8d"] Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.055499 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9bp8d"] Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.072138 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 
14:17:00.083338 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100193 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100714 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-central-agent" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100733 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-central-agent" Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100753 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="proxy-httpd" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100760 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="proxy-httpd" Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100781 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="extract-content" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100790 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="extract-content" Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100818 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-notification-agent" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100827 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-notification-agent" Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100846 4750 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="extract-utilities" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100852 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="extract-utilities" Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100868 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100873 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" Feb 14 14:17:00 crc kubenswrapper[4750]: E0214 14:17:00.100884 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="sg-core" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.100892 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="sg-core" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.101076 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="proxy-httpd" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.101091 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-notification-agent" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.101104 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" containerName="registry-server" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.101206 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="ceilometer-central-agent" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.101224 4750 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" containerName="sg-core" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.120792 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.127432 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.127731 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.127863 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.143455 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.276284 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-config-data\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.276364 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-log-httpd\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.276543 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-scripts\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" 
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.276984 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.277249 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.277285 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.277380 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-run-httpd\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.277486 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlx8\" (UniqueName: \"kubernetes.io/projected/8906e470-8b1f-4000-9050-72bdcf751ed6-kube-api-access-fzlx8\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0" Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.379884 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fzlx8\" (UniqueName: \"kubernetes.io/projected/8906e470-8b1f-4000-9050-72bdcf751ed6-kube-api-access-fzlx8\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.379972 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-config-data\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380025 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-log-httpd\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380053 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-scripts\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380292 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380433 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380455 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380690 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-run-httpd\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.381002 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-run-httpd\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.380925 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-log-httpd\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.389597 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.390095 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-scripts\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.391740 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-config-data\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.394588 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.395430 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.400266 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzlx8\" (UniqueName: \"kubernetes.io/projected/8906e470-8b1f-4000-9050-72bdcf751ed6-kube-api-access-fzlx8\") pod \"ceilometer-0\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") " pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.484140 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.578783 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-nlb47"]
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.596999 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.602947 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.603393 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.609632 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.610245 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ktj55"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.642742 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nlb47"]
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.699469 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mxzb\" (UniqueName: \"kubernetes.io/projected/233e8b69-480e-4c6b-a127-e0c8b21616ed-kube-api-access-4mxzb\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.699626 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-scripts\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.699700 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-combined-ca-bundle\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.699903 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-config-data\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.769074 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2096d84a-1034-4cbc-8df6-0d4df0161fc3" path="/var/lib/kubelet/pods/2096d84a-1034-4cbc-8df6-0d4df0161fc3/volumes"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.770625 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864aae15-a641-418d-9df3-8d91a914ccfa" path="/var/lib/kubelet/pods/864aae15-a641-418d-9df3-8d91a914ccfa/volumes"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.801732 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mxzb\" (UniqueName: \"kubernetes.io/projected/233e8b69-480e-4c6b-a127-e0c8b21616ed-kube-api-access-4mxzb\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.801982 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-scripts\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.802055 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-combined-ca-bundle\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.802283 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-config-data\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.808150 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-combined-ca-bundle\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.812652 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-scripts\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.816741 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-config-data\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.829719 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mxzb\" (UniqueName: \"kubernetes.io/projected/233e8b69-480e-4c6b-a127-e0c8b21616ed-kube-api-access-4mxzb\") pod \"aodh-db-sync-nlb47\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.932983 4750 generic.go:334] "Generic (PLEG): container finished" podID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerID="c7e458f9a6253ac3969302512df4866f7f3b505bd5f59f6e1b3daebb7174f2b7" exitCode=143
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.933036 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"acfe2e91-cbbe-4fd4-a194-697f5240263d","Type":"ContainerDied","Data":"c7e458f9a6253ac3969302512df4866f7f3b505bd5f59f6e1b3daebb7174f2b7"}
Feb 14 14:17:00 crc kubenswrapper[4750]: I0214 14:17:00.952878 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:01 crc kubenswrapper[4750]: I0214 14:17:01.035340 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 14:17:01 crc kubenswrapper[4750]: I0214 14:17:01.519020 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nlb47"]
Feb 14 14:17:01 crc kubenswrapper[4750]: W0214 14:17:01.523727 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod233e8b69_480e_4c6b_a127_e0c8b21616ed.slice/crio-cd6e2052486a3bf68c2099b9fb79da26d04f268441a5fb0779336a6e220d462c WatchSource:0}: Error finding container cd6e2052486a3bf68c2099b9fb79da26d04f268441a5fb0779336a6e220d462c: Status 404 returned error can't find the container with id cd6e2052486a3bf68c2099b9fb79da26d04f268441a5fb0779336a6e220d462c
Feb 14 14:17:01 crc kubenswrapper[4750]: I0214 14:17:01.944676 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nlb47" event={"ID":"233e8b69-480e-4c6b-a127-e0c8b21616ed","Type":"ContainerStarted","Data":"cd6e2052486a3bf68c2099b9fb79da26d04f268441a5fb0779336a6e220d462c"}
Feb 14 14:17:01 crc kubenswrapper[4750]: I0214 14:17:01.948516 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerStarted","Data":"0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62"}
Feb 14 14:17:01 crc kubenswrapper[4750]: I0214 14:17:01.948693 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerStarted","Data":"1fa035a214b402ef39f02b333d9b9e418cf032ec2af03f1b0101ba2176380d3b"}
Feb 14 14:17:02 crc kubenswrapper[4750]: I0214 14:17:02.961575 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerStarted","Data":"56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe"}
Feb 14 14:17:03 crc kubenswrapper[4750]: I0214 14:17:03.982415 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerStarted","Data":"e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f"}
Feb 14 14:17:03 crc kubenswrapper[4750]: I0214 14:17:03.985688 4750 generic.go:334] "Generic (PLEG): container finished" podID="7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" containerID="52ca3a14b634963ed0acfe0eb967f9160c104a19a3db8df86d62fcd1f81eafeb" exitCode=0
Feb 14 14:17:03 crc kubenswrapper[4750]: I0214 14:17:03.985729 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6dqm" event={"ID":"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f","Type":"ContainerDied","Data":"52ca3a14b634963ed0acfe0eb967f9160c104a19a3db8df86d62fcd1f81eafeb"}
Feb 14 14:17:03 crc kubenswrapper[4750]: I0214 14:17:03.991976 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 14:17:03 crc kubenswrapper[4750]: I0214 14:17:03.992126 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.031306 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.031546 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.161557 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.194320 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-74l7d"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.284633 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.285532 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.287638 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-8fdrk"]
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.287883 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="dnsmasq-dns" containerID="cri-o://30c209d1b27b37211b0cd6b45b8493f672acfdde0c2d3872d829ad1220b9ea7f" gracePeriod=10
Feb 14 14:17:04 crc kubenswrapper[4750]: I0214 14:17:04.364767 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.000764 4750 generic.go:334] "Generic (PLEG): container finished" podID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerID="30c209d1b27b37211b0cd6b45b8493f672acfdde0c2d3872d829ad1220b9ea7f" exitCode=0
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.000906 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" event={"ID":"b2ed0d86-71db-4bad-a5db-e596444d65f6","Type":"ContainerDied","Data":"30c209d1b27b37211b0cd6b45b8493f672acfdde0c2d3872d829ad1220b9ea7f"}
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.003967 4750 generic.go:334] "Generic (PLEG): container finished" podID="866ef186-d207-4fb3-8940-8c27b138480b" containerID="555f94b71066c3a86bce6b9fe88fdfd9783dee13be10d4a8bd5808b751256080" exitCode=0
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.004168 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-72fmt" event={"ID":"866ef186-d207-4fb3-8940-8c27b138480b","Type":"ContainerDied","Data":"555f94b71066c3a86bce6b9fe88fdfd9783dee13be10d4a8bd5808b751256080"}
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.050492 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.074414 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.244:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.074460 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.244:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 14:17:05 crc kubenswrapper[4750]: I0214 14:17:05.585055 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.229:5353: connect: connection refused"
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.241197 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-72fmt"
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.253759 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6dqm"
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.400750 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-combined-ca-bundle\") pod \"866ef186-d207-4fb3-8940-8c27b138480b\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401023 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-scripts\") pod \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401134 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r629t\" (UniqueName: \"kubernetes.io/projected/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-kube-api-access-r629t\") pod \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401335 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqdv9\" (UniqueName: \"kubernetes.io/projected/866ef186-d207-4fb3-8940-8c27b138480b-kube-api-access-xqdv9\") pod \"866ef186-d207-4fb3-8940-8c27b138480b\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401442 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-combined-ca-bundle\") pod \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401499 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-config-data\") pod \"866ef186-d207-4fb3-8940-8c27b138480b\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401535 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-config-data\") pod \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\" (UID: \"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.401654 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-scripts\") pod \"866ef186-d207-4fb3-8940-8c27b138480b\" (UID: \"866ef186-d207-4fb3-8940-8c27b138480b\") "
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.409999 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-scripts" (OuterVolumeSpecName: "scripts") pod "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" (UID: "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.413370 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866ef186-d207-4fb3-8940-8c27b138480b-kube-api-access-xqdv9" (OuterVolumeSpecName: "kube-api-access-xqdv9") pod "866ef186-d207-4fb3-8940-8c27b138480b" (UID: "866ef186-d207-4fb3-8940-8c27b138480b"). InnerVolumeSpecName "kube-api-access-xqdv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.415479 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-scripts" (OuterVolumeSpecName: "scripts") pod "866ef186-d207-4fb3-8940-8c27b138480b" (UID: "866ef186-d207-4fb3-8940-8c27b138480b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.452207 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-kube-api-access-r629t" (OuterVolumeSpecName: "kube-api-access-r629t") pod "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" (UID: "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f"). InnerVolumeSpecName "kube-api-access-r629t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.489063 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-config-data" (OuterVolumeSpecName: "config-data") pod "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" (UID: "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.491545 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "866ef186-d207-4fb3-8940-8c27b138480b" (UID: "866ef186-d207-4fb3-8940-8c27b138480b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.503099 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" (UID: "7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504518 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504567 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r629t\" (UniqueName: \"kubernetes.io/projected/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-kube-api-access-r629t\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504581 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqdv9\" (UniqueName: \"kubernetes.io/projected/866ef186-d207-4fb3-8940-8c27b138480b-kube-api-access-xqdv9\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504590 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504598 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504605 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.504613 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.539291 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-config-data" (OuterVolumeSpecName: "config-data") pod "866ef186-d207-4fb3-8940-8c27b138480b" (UID: "866ef186-d207-4fb3-8940-8c27b138480b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.607314 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866ef186-d207-4fb3-8940-8c27b138480b-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:07 crc kubenswrapper[4750]: I0214 14:17:07.908785 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.014590 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-config\") pod \"b2ed0d86-71db-4bad-a5db-e596444d65f6\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") "
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.014728 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-svc\") pod \"b2ed0d86-71db-4bad-a5db-e596444d65f6\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") "
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.014894 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmtxg\" (UniqueName: \"kubernetes.io/projected/b2ed0d86-71db-4bad-a5db-e596444d65f6-kube-api-access-lmtxg\") pod \"b2ed0d86-71db-4bad-a5db-e596444d65f6\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") "
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.014970 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-sb\") pod \"b2ed0d86-71db-4bad-a5db-e596444d65f6\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") "
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.015013 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-swift-storage-0\") pod \"b2ed0d86-71db-4bad-a5db-e596444d65f6\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") "
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.015028 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-nb\") pod \"b2ed0d86-71db-4bad-a5db-e596444d65f6\" (UID: \"b2ed0d86-71db-4bad-a5db-e596444d65f6\") "
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.028295 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ed0d86-71db-4bad-a5db-e596444d65f6-kube-api-access-lmtxg" (OuterVolumeSpecName: "kube-api-access-lmtxg") pod "b2ed0d86-71db-4bad-a5db-e596444d65f6" (UID: "b2ed0d86-71db-4bad-a5db-e596444d65f6"). InnerVolumeSpecName "kube-api-access-lmtxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.045805 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk" event={"ID":"b2ed0d86-71db-4bad-a5db-e596444d65f6","Type":"ContainerDied","Data":"3e37ca3a66ce60072db55ed5f910089011cf3546d38a5669164dbf5b7bc7944b"}
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.045855 4750 scope.go:117] "RemoveContainer" containerID="30c209d1b27b37211b0cd6b45b8493f672acfdde0c2d3872d829ad1220b9ea7f"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.045980 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-8fdrk"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.050539 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-72fmt" event={"ID":"866ef186-d207-4fb3-8940-8c27b138480b","Type":"ContainerDied","Data":"a2feb2816476b9401ce87e7abce4415b96d738ac6166230b8630b17e669564a3"}
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.050604 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2feb2816476b9401ce87e7abce4415b96d738ac6166230b8630b17e669564a3"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.050739 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-72fmt"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.053385 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6dqm" event={"ID":"7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f","Type":"ContainerDied","Data":"edc95c543bf02800b8cabc09dc7bc825d9e653fe21f6e77bfe61553fc16764cb"}
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.053420 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc95c543bf02800b8cabc09dc7bc825d9e653fe21f6e77bfe61553fc16764cb"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.053483 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6dqm"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.073179 4750 scope.go:117] "RemoveContainer" containerID="42139d7feee7fc43603e45a1ed86961e488202069bf483d00431d3caa6f156aa"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.117911 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmtxg\" (UniqueName: \"kubernetes.io/projected/b2ed0d86-71db-4bad-a5db-e596444d65f6-kube-api-access-lmtxg\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.122967 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-config" (OuterVolumeSpecName: "config") pod "b2ed0d86-71db-4bad-a5db-e596444d65f6" (UID: "b2ed0d86-71db-4bad-a5db-e596444d65f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.130478 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2ed0d86-71db-4bad-a5db-e596444d65f6" (UID: "b2ed0d86-71db-4bad-a5db-e596444d65f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.146510 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b2ed0d86-71db-4bad-a5db-e596444d65f6" (UID: "b2ed0d86-71db-4bad-a5db-e596444d65f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.147776 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2ed0d86-71db-4bad-a5db-e596444d65f6" (UID: "b2ed0d86-71db-4bad-a5db-e596444d65f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.154651 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2ed0d86-71db-4bad-a5db-e596444d65f6" (UID: "b2ed0d86-71db-4bad-a5db-e596444d65f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.219798 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.219834 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.219846 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.219856 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.219865 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ed0d86-71db-4bad-a5db-e596444d65f6-config\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.383625 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 14 14:17:08 crc kubenswrapper[4750]: E0214 14:17:08.384511 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="init"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384526 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="init"
Feb 14 14:17:08 crc kubenswrapper[4750]: E0214 14:17:08.384547 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="dnsmasq-dns"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384554 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="dnsmasq-dns"
Feb 14 14:17:08 crc kubenswrapper[4750]: E0214 14:17:08.384572 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866ef186-d207-4fb3-8940-8c27b138480b" containerName="nova-cell1-conductor-db-sync"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384578 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="866ef186-d207-4fb3-8940-8c27b138480b" containerName="nova-cell1-conductor-db-sync"
Feb 14 14:17:08 crc kubenswrapper[4750]: E0214 14:17:08.384602 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" containerName="nova-manage"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384609 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" containerName="nova-manage"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384853 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" containerName="nova-manage"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384869 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" containerName="dnsmasq-dns"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.384890 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="866ef186-d207-4fb3-8940-8c27b138480b" containerName="nova-cell1-conductor-db-sync"
Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.385892 4750 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.391289 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.405594 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-8fdrk"] Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.424247 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-8fdrk"] Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.433867 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.476991 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.477226 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" containerName="nova-scheduler-scheduler" containerID="cri-o://ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d" gracePeriod=30 Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.500981 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.501261 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-log" containerID="cri-o://d86076561cbf328745c6f32f0abb482cfbedd251dd8872ccf30c69d77c2f2f97" gracePeriod=30 Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.501849 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-api" 
containerID="cri-o://5bf2917018da5101577aa64c044e513ce3174121f46bbe584a0d864762f563ac" gracePeriod=30 Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.528315 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcv69\" (UniqueName: \"kubernetes.io/projected/8e8a5550-ea14-49fa-ae9d-b38d05a23254-kube-api-access-bcv69\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.528385 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8a5550-ea14-49fa-ae9d-b38d05a23254-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.528683 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8a5550-ea14-49fa-ae9d-b38d05a23254-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.631571 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcv69\" (UniqueName: \"kubernetes.io/projected/8e8a5550-ea14-49fa-ae9d-b38d05a23254-kube-api-access-bcv69\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.631658 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8a5550-ea14-49fa-ae9d-b38d05a23254-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.631726 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8a5550-ea14-49fa-ae9d-b38d05a23254-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.637784 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8a5550-ea14-49fa-ae9d-b38d05a23254-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.637796 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8a5550-ea14-49fa-ae9d-b38d05a23254-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.656950 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcv69\" (UniqueName: \"kubernetes.io/projected/8e8a5550-ea14-49fa-ae9d-b38d05a23254-kube-api-access-bcv69\") pod \"nova-cell1-conductor-0\" (UID: \"8e8a5550-ea14-49fa-ae9d-b38d05a23254\") " pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.717678 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:08 crc kubenswrapper[4750]: I0214 14:17:08.766188 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ed0d86-71db-4bad-a5db-e596444d65f6" path="/var/lib/kubelet/pods/b2ed0d86-71db-4bad-a5db-e596444d65f6/volumes" Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.077176 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerStarted","Data":"0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86"} Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.077830 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.079758 4750 generic.go:334] "Generic (PLEG): container finished" podID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerID="d86076561cbf328745c6f32f0abb482cfbedd251dd8872ccf30c69d77c2f2f97" exitCode=143 Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.079802 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e","Type":"ContainerDied","Data":"d86076561cbf328745c6f32f0abb482cfbedd251dd8872ccf30c69d77c2f2f97"} Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.081379 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nlb47" event={"ID":"233e8b69-480e-4c6b-a127-e0c8b21616ed","Type":"ContainerStarted","Data":"52d06e7fea7a3a0c977e6fb59c1384d6ac07e2ec1ac41ffa0b5f53ded75f9aa8"} Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.109065 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.051750944 podStartE2EDuration="9.10904571s" podCreationTimestamp="2026-02-14 14:17:00 +0000 UTC" firstStartedPulling="2026-02-14 14:17:01.047127438 +0000 UTC m=+1493.073116909" 
lastFinishedPulling="2026-02-14 14:17:07.104422194 +0000 UTC m=+1499.130411675" observedRunningTime="2026-02-14 14:17:09.098560781 +0000 UTC m=+1501.124550282" watchObservedRunningTime="2026-02-14 14:17:09.10904571 +0000 UTC m=+1501.135035191" Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.131751 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-nlb47" podStartSLOduration=2.683973737 podStartE2EDuration="9.131730615s" podCreationTimestamp="2026-02-14 14:17:00 +0000 UTC" firstStartedPulling="2026-02-14 14:17:01.52606386 +0000 UTC m=+1493.552053341" lastFinishedPulling="2026-02-14 14:17:07.973820728 +0000 UTC m=+1499.999810219" observedRunningTime="2026-02-14 14:17:09.126433705 +0000 UTC m=+1501.152423176" watchObservedRunningTime="2026-02-14 14:17:09.131730615 +0000 UTC m=+1501.157720096" Feb 14 14:17:09 crc kubenswrapper[4750]: I0214 14:17:09.245073 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 14:17:09 crc kubenswrapper[4750]: W0214 14:17:09.250153 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8a5550_ea14_49fa_ae9d_b38d05a23254.slice/crio-e9fa9d4b46b540c5d1d0ef6870d5df912bcc6cdf02b2fe04d1add7605552a0ea WatchSource:0}: Error finding container e9fa9d4b46b540c5d1d0ef6870d5df912bcc6cdf02b2fe04d1add7605552a0ea: Status 404 returned error can't find the container with id e9fa9d4b46b540c5d1d0ef6870d5df912bcc6cdf02b2fe04d1add7605552a0ea Feb 14 14:17:09 crc kubenswrapper[4750]: E0214 14:17:09.293413 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 14:17:09 crc kubenswrapper[4750]: E0214 
14:17:09.303258 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 14:17:09 crc kubenswrapper[4750]: E0214 14:17:09.314963 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 14:17:09 crc kubenswrapper[4750]: E0214 14:17:09.315062 4750 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" containerName="nova-scheduler-scheduler" Feb 14 14:17:10 crc kubenswrapper[4750]: I0214 14:17:10.096793 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e8a5550-ea14-49fa-ae9d-b38d05a23254","Type":"ContainerStarted","Data":"a7e12ebb4dbc70d1c61fe5dc19491f1cc05c2ec69ee341cb29e7faece0034357"} Feb 14 14:17:10 crc kubenswrapper[4750]: I0214 14:17:10.097210 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e8a5550-ea14-49fa-ae9d-b38d05a23254","Type":"ContainerStarted","Data":"e9fa9d4b46b540c5d1d0ef6870d5df912bcc6cdf02b2fe04d1add7605552a0ea"} Feb 14 14:17:10 crc kubenswrapper[4750]: I0214 14:17:10.141624 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.141592269 podStartE2EDuration="2.141592269s" 
podCreationTimestamp="2026-02-14 14:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:10.118920493 +0000 UTC m=+1502.144909984" watchObservedRunningTime="2026-02-14 14:17:10.141592269 +0000 UTC m=+1502.167581790" Feb 14 14:17:11 crc kubenswrapper[4750]: I0214 14:17:11.107174 4750 generic.go:334] "Generic (PLEG): container finished" podID="233e8b69-480e-4c6b-a127-e0c8b21616ed" containerID="52d06e7fea7a3a0c977e6fb59c1384d6ac07e2ec1ac41ffa0b5f53ded75f9aa8" exitCode=0 Feb 14 14:17:11 crc kubenswrapper[4750]: I0214 14:17:11.107270 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nlb47" event={"ID":"233e8b69-480e-4c6b-a127-e0c8b21616ed","Type":"ContainerDied","Data":"52d06e7fea7a3a0c977e6fb59c1384d6ac07e2ec1ac41ffa0b5f53ded75f9aa8"} Feb 14 14:17:11 crc kubenswrapper[4750]: I0214 14:17:11.107464 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.124464 4750 generic.go:334] "Generic (PLEG): container finished" podID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" containerID="ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d" exitCode=0 Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.124549 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c26487e4-dff4-42b9-8e04-ef40c9b079f4","Type":"ContainerDied","Data":"ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d"} Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.126797 4750 generic.go:334] "Generic (PLEG): container finished" podID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerID="5bf2917018da5101577aa64c044e513ce3174121f46bbe584a0d864762f563ac" exitCode=0 Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.126880 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e","Type":"ContainerDied","Data":"5bf2917018da5101577aa64c044e513ce3174121f46bbe584a0d864762f563ac"} Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.459124 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.534700 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-combined-ca-bundle\") pod \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.534761 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j8v7\" (UniqueName: \"kubernetes.io/projected/c26487e4-dff4-42b9-8e04-ef40c9b079f4-kube-api-access-7j8v7\") pod \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.534833 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-config-data\") pod \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\" (UID: \"c26487e4-dff4-42b9-8e04-ef40c9b079f4\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.540395 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26487e4-dff4-42b9-8e04-ef40c9b079f4-kube-api-access-7j8v7" (OuterVolumeSpecName: "kube-api-access-7j8v7") pod "c26487e4-dff4-42b9-8e04-ef40c9b079f4" (UID: "c26487e4-dff4-42b9-8e04-ef40c9b079f4"). InnerVolumeSpecName "kube-api-access-7j8v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.571235 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-config-data" (OuterVolumeSpecName: "config-data") pod "c26487e4-dff4-42b9-8e04-ef40c9b079f4" (UID: "c26487e4-dff4-42b9-8e04-ef40c9b079f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.607803 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.609891 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c26487e4-dff4-42b9-8e04-ef40c9b079f4" (UID: "c26487e4-dff4-42b9-8e04-ef40c9b079f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.641774 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-combined-ca-bundle\") pod \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.642164 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlcc5\" (UniqueName: \"kubernetes.io/projected/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-kube-api-access-dlcc5\") pod \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.642478 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-logs\") pod \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.642958 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-logs" (OuterVolumeSpecName: "logs") pod "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" (UID: "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.644792 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-kube-api-access-dlcc5" (OuterVolumeSpecName: "kube-api-access-dlcc5") pod "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" (UID: "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e"). InnerVolumeSpecName "kube-api-access-dlcc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.646028 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-config-data\") pod \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\" (UID: \"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.649135 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.649172 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlcc5\" (UniqueName: \"kubernetes.io/projected/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-kube-api-access-dlcc5\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.649189 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j8v7\" (UniqueName: \"kubernetes.io/projected/c26487e4-dff4-42b9-8e04-ef40c9b079f4-kube-api-access-7j8v7\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.649202 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26487e4-dff4-42b9-8e04-ef40c9b079f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.649215 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.674022 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nlb47" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.675527 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" (UID: "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.677162 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-config-data" (OuterVolumeSpecName: "config-data") pod "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" (UID: "3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.749699 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-config-data\") pod \"233e8b69-480e-4c6b-a127-e0c8b21616ed\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.749887 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-scripts\") pod \"233e8b69-480e-4c6b-a127-e0c8b21616ed\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.750424 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mxzb\" (UniqueName: \"kubernetes.io/projected/233e8b69-480e-4c6b-a127-e0c8b21616ed-kube-api-access-4mxzb\") pod \"233e8b69-480e-4c6b-a127-e0c8b21616ed\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " Feb 14 
14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.750580 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-combined-ca-bundle\") pod \"233e8b69-480e-4c6b-a127-e0c8b21616ed\" (UID: \"233e8b69-480e-4c6b-a127-e0c8b21616ed\") " Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.751189 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.751206 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.752501 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-scripts" (OuterVolumeSpecName: "scripts") pod "233e8b69-480e-4c6b-a127-e0c8b21616ed" (UID: "233e8b69-480e-4c6b-a127-e0c8b21616ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.754051 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233e8b69-480e-4c6b-a127-e0c8b21616ed-kube-api-access-4mxzb" (OuterVolumeSpecName: "kube-api-access-4mxzb") pod "233e8b69-480e-4c6b-a127-e0c8b21616ed" (UID: "233e8b69-480e-4c6b-a127-e0c8b21616ed"). InnerVolumeSpecName "kube-api-access-4mxzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.784886 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "233e8b69-480e-4c6b-a127-e0c8b21616ed" (UID: "233e8b69-480e-4c6b-a127-e0c8b21616ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.809406 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-config-data" (OuterVolumeSpecName: "config-data") pod "233e8b69-480e-4c6b-a127-e0c8b21616ed" (UID: "233e8b69-480e-4c6b-a127-e0c8b21616ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.855346 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.855389 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.855401 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mxzb\" (UniqueName: \"kubernetes.io/projected/233e8b69-480e-4c6b-a127-e0c8b21616ed-kube-api-access-4mxzb\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:12 crc kubenswrapper[4750]: I0214 14:17:12.855411 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233e8b69-480e-4c6b-a127-e0c8b21616ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.144759 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e","Type":"ContainerDied","Data":"92470ff9b5f7ee228e4f270343e36fd85e2d66b89f570b85ef373a0c8f41c4a9"}
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.144845 4750 scope.go:117] "RemoveContainer" containerID="5bf2917018da5101577aa64c044e513ce3174121f46bbe584a0d864762f563ac"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.144775 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.147771 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nlb47" event={"ID":"233e8b69-480e-4c6b-a127-e0c8b21616ed","Type":"ContainerDied","Data":"cd6e2052486a3bf68c2099b9fb79da26d04f268441a5fb0779336a6e220d462c"}
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.147807 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6e2052486a3bf68c2099b9fb79da26d04f268441a5fb0779336a6e220d462c"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.147867 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nlb47"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.158348 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c26487e4-dff4-42b9-8e04-ef40c9b079f4","Type":"ContainerDied","Data":"c917b0dd8fd53cf4180a2c664069d64402070699276d4ed3b62a3557cbea471d"}
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.158883 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.188719 4750 scope.go:117] "RemoveContainer" containerID="d86076561cbf328745c6f32f0abb482cfbedd251dd8872ccf30c69d77c2f2f97"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.209734 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.222573 4750 scope.go:117] "RemoveContainer" containerID="ed62e0d8b376940d9fa6f57829b1c080007b61c1cb0d2e626a9f8014714b682d"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.272102 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.285077 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.296940 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.306361 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: E0214 14:17:13.306900 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" containerName="nova-scheduler-scheduler"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.306923 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" containerName="nova-scheduler-scheduler"
Feb 14 14:17:13 crc kubenswrapper[4750]: E0214 14:17:13.306942 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-api"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.306948 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-api"
Feb 14 14:17:13 crc kubenswrapper[4750]: E0214 14:17:13.306967 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233e8b69-480e-4c6b-a127-e0c8b21616ed" containerName="aodh-db-sync"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.306973 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="233e8b69-480e-4c6b-a127-e0c8b21616ed" containerName="aodh-db-sync"
Feb 14 14:17:13 crc kubenswrapper[4750]: E0214 14:17:13.306995 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-log"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.307002 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-log"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.307258 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="233e8b69-480e-4c6b-a127-e0c8b21616ed" containerName="aodh-db-sync"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.307272 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-api"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.307282 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" containerName="nova-api-log"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.307290 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" containerName="nova-scheduler-scheduler"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.308668 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.310822 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.318289 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.327074 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.329196 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.335776 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.336523 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.384907 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-logs\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.388126 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.388370 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-config-data\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.388554 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-778k6\" (UniqueName: \"kubernetes.io/projected/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-kube-api-access-778k6\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.388666 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.388741 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-config-data\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.388849 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4cfr\" (UniqueName: \"kubernetes.io/projected/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-kube-api-access-z4cfr\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491062 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491139 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-config-data\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491175 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4cfr\" (UniqueName: \"kubernetes.io/projected/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-kube-api-access-z4cfr\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491257 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-logs\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491290 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491374 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-config-data\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.491443 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-778k6\" (UniqueName: \"kubernetes.io/projected/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-kube-api-access-778k6\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.492526 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-logs\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.499955 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-config-data\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.500022 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-config-data\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.500431 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.506735 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.508644 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4cfr\" (UniqueName: \"kubernetes.io/projected/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-kube-api-access-z4cfr\") pod \"nova-scheduler-0\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " pod="openstack/nova-scheduler-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.518374 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-778k6\" (UniqueName: \"kubernetes.io/projected/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-kube-api-access-778k6\") pod \"nova-api-0\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.638687 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 14:17:13 crc kubenswrapper[4750]: I0214 14:17:13.653223 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 14:17:14 crc kubenswrapper[4750]: I0214 14:17:14.187342 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 14:17:14 crc kubenswrapper[4750]: I0214 14:17:14.209006 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 14:17:14 crc kubenswrapper[4750]: I0214 14:17:14.757877 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e" path="/var/lib/kubelet/pods/3bbd8fe7-884c-4c65-b73c-b94b4d5fb06e/volumes"
Feb 14 14:17:14 crc kubenswrapper[4750]: I0214 14:17:14.759270 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26487e4-dff4-42b9-8e04-ef40c9b079f4" path="/var/lib/kubelet/pods/c26487e4-dff4-42b9-8e04-ef40c9b079f4/volumes"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.193301 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e65daf92-2b4a-4b12-9cef-8b8267dc13cf","Type":"ContainerStarted","Data":"9c09008851f386a37fbf7ca0119cdf4ad61024dfee3600e57a1e0619609979fc"}
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.193355 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e65daf92-2b4a-4b12-9cef-8b8267dc13cf","Type":"ContainerStarted","Data":"faf2066662005fc0b096e46726c0c9001ea538c0bcb30924aed86e8ddcba769d"}
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.193365 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e65daf92-2b4a-4b12-9cef-8b8267dc13cf","Type":"ContainerStarted","Data":"cbd1d611e1bc6f4c561c1a7f1fa897b6698373110891191d69262b3de21f87c5"}
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.195206 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50","Type":"ContainerStarted","Data":"0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0"}
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.195250 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50","Type":"ContainerStarted","Data":"cc17e6fa708cdb5e3936f6f24760043f3274a52f4ffcf455c9d903097472b71f"}
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.212694 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2126712729999998 podStartE2EDuration="2.212671273s" podCreationTimestamp="2026-02-14 14:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:15.210443519 +0000 UTC m=+1507.236433010" watchObservedRunningTime="2026-02-14 14:17:15.212671273 +0000 UTC m=+1507.238660774"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.247427 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.247402191 podStartE2EDuration="2.247402191s" podCreationTimestamp="2026-02-14 14:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:15.234892745 +0000 UTC m=+1507.260882226" watchObservedRunningTime="2026-02-14 14:17:15.247402191 +0000 UTC m=+1507.273391682"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.690194 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.693941 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.704103 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.716340 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.716564 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.717215 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ktj55"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.748933 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.749102 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-scripts\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.749435 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pg8n\" (UniqueName: \"kubernetes.io/projected/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-kube-api-access-5pg8n\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.749472 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-config-data\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.851896 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pg8n\" (UniqueName: \"kubernetes.io/projected/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-kube-api-access-5pg8n\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.851936 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-config-data\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.852012 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.852098 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-scripts\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.858456 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.860225 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-config-data\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.872511 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-scripts\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:15 crc kubenswrapper[4750]: I0214 14:17:15.873175 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pg8n\" (UniqueName: \"kubernetes.io/projected/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-kube-api-access-5pg8n\") pod \"aodh-0\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " pod="openstack/aodh-0"
Feb 14 14:17:16 crc kubenswrapper[4750]: I0214 14:17:16.036758 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 14:17:16 crc kubenswrapper[4750]: W0214 14:17:16.516361 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ea84b0_8bd9_44e7_b6e5_3c1f026c55c1.slice/crio-90d46724bc5606130b11a80809415823ef8acc22c9b5c8032e0042d9420ebd49 WatchSource:0}: Error finding container 90d46724bc5606130b11a80809415823ef8acc22c9b5c8032e0042d9420ebd49: Status 404 returned error can't find the container with id 90d46724bc5606130b11a80809415823ef8acc22c9b5c8032e0042d9420ebd49
Feb 14 14:17:16 crc kubenswrapper[4750]: I0214 14:17:16.518632 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 14 14:17:17 crc kubenswrapper[4750]: I0214 14:17:17.218898 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerStarted","Data":"90d46724bc5606130b11a80809415823ef8acc22c9b5c8032e0042d9420ebd49"}
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.236087 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerStarted","Data":"cf3e810d8fa5a1a38cb5b3f63b4002c1e8c601c544c32044ae266b107ac5e44c"}
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.551317 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.551585 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-central-agent" containerID="cri-o://0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62" gracePeriod=30
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.552383 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="proxy-httpd" containerID="cri-o://0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86" gracePeriod=30
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.552438 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="sg-core" containerID="cri-o://e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f" gracePeriod=30
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.552471 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-notification-agent" containerID="cri-o://56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe" gracePeriod=30
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.654277 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.657068 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.250:3000/\": read tcp 10.217.0.2:50070->10.217.0.250:3000: read: connection reset by peer"
Feb 14 14:17:18 crc kubenswrapper[4750]: I0214 14:17:18.758274 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.170596 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.249488 4750 generic.go:334] "Generic (PLEG): container finished" podID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerID="0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86" exitCode=0
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.249521 4750 generic.go:334] "Generic (PLEG): container finished" podID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerID="e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f" exitCode=2
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.249532 4750 generic.go:334] "Generic (PLEG): container finished" podID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerID="0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62" exitCode=0
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.249573 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerDied","Data":"0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86"}
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.249609 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerDied","Data":"e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f"}
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.249620 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerDied","Data":"0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62"}
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.251659 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerStarted","Data":"b406232ebd5290455415a98e22ff0b8fcc1a078f43da3959cb565ada3963f63a"}
Feb 14 14:17:19 crc kubenswrapper[4750]: I0214 14:17:19.926383 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060066 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-sg-core-conf-yaml\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060167 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-config-data\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060227 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-ceilometer-tls-certs\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060323 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-log-httpd\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060357 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-combined-ca-bundle\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060413 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-scripts\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.060468 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-run-httpd\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.069379 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzlx8\" (UniqueName: \"kubernetes.io/projected/8906e470-8b1f-4000-9050-72bdcf751ed6-kube-api-access-fzlx8\") pod \"8906e470-8b1f-4000-9050-72bdcf751ed6\" (UID: \"8906e470-8b1f-4000-9050-72bdcf751ed6\") "
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.071927 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.085762 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.088689 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-scripts" (OuterVolumeSpecName: "scripts") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.115575 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8906e470-8b1f-4000-9050-72bdcf751ed6-kube-api-access-fzlx8" (OuterVolumeSpecName: "kube-api-access-fzlx8") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "kube-api-access-fzlx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.177535 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzlx8\" (UniqueName: \"kubernetes.io/projected/8906e470-8b1f-4000-9050-72bdcf751ed6-kube-api-access-fzlx8\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.177571 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.177581 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.177588 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8906e470-8b1f-4000-9050-72bdcf751ed6-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.202377 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.236000 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.272348 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.272720 4750 generic.go:334] "Generic (PLEG): container finished" podID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerID="56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe" exitCode=0
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.272750 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerDied","Data":"56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe"}
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.273591 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8906e470-8b1f-4000-9050-72bdcf751ed6","Type":"ContainerDied","Data":"1fa035a214b402ef39f02b333d9b9e418cf032ec2af03f1b0101ba2176380d3b"}
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.272837 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.273689 4750 scope.go:117] "RemoveContainer" containerID="0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86"
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.280531 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.280562 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.280573 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.320368 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-config-data" (OuterVolumeSpecName: "config-data") pod "8906e470-8b1f-4000-9050-72bdcf751ed6" (UID: "8906e470-8b1f-4000-9050-72bdcf751ed6"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.382010 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906e470-8b1f-4000-9050-72bdcf751ed6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.570992 4750 scope.go:117] "RemoveContainer" containerID="e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.666502 4750 scope.go:117] "RemoveContainer" containerID="56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.670342 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.696493 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.728462 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.729072 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-notification-agent" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729089 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-notification-agent" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.729133 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="sg-core" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729141 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="sg-core" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.729163 4750 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="proxy-httpd" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729171 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="proxy-httpd" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.729202 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-central-agent" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729210 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-central-agent" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729460 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-central-agent" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729483 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="sg-core" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729503 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="ceilometer-notification-agent" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.729531 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" containerName="proxy-httpd" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.732549 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.742719 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.743489 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.743688 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.783708 4750 scope.go:117] "RemoveContainer" containerID="0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.787964 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8906e470-8b1f-4000-9050-72bdcf751ed6" path="/var/lib/kubelet/pods/8906e470-8b1f-4000-9050-72bdcf751ed6/volumes" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.788957 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.828988 4750 scope.go:117] "RemoveContainer" containerID="0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.829467 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86\": container with ID starting with 0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86 not found: ID does not exist" containerID="0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.829493 4750 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86"} err="failed to get container status \"0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86\": rpc error: code = NotFound desc = could not find container \"0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86\": container with ID starting with 0dee4a35b80cefb94f92ab519e0eb04a171011f678d8ce624bf9b02108f58b86 not found: ID does not exist" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.829512 4750 scope.go:117] "RemoveContainer" containerID="e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.829750 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f\": container with ID starting with e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f not found: ID does not exist" containerID="e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.829769 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f"} err="failed to get container status \"e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f\": rpc error: code = NotFound desc = could not find container \"e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f\": container with ID starting with e133a909ad22e9d590886624b3c396f5ddfb471fe5fde8f10ba292c2acfdbd4f not found: ID does not exist" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.829782 4750 scope.go:117] "RemoveContainer" containerID="56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.829953 4750 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe\": container with ID starting with 56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe not found: ID does not exist" containerID="56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.829969 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe"} err="failed to get container status \"56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe\": rpc error: code = NotFound desc = could not find container \"56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe\": container with ID starting with 56fa53db6a2fccdf0ea117433a20feafd9c3f12edb947b15f1e8f06150ebf8fe not found: ID does not exist" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.829982 4750 scope.go:117] "RemoveContainer" containerID="0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62" Feb 14 14:17:20 crc kubenswrapper[4750]: E0214 14:17:20.830152 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62\": container with ID starting with 0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62 not found: ID does not exist" containerID="0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.830166 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62"} err="failed to get container status \"0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62\": rpc error: code = NotFound desc = could not find container 
\"0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62\": container with ID starting with 0d050a6553f3b1b3b30508a974580b32e5a9d9a61a8189b58e39a3f85acb5c62 not found: ID does not exist" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893682 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893749 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893768 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-log-httpd\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893813 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vzz\" (UniqueName: \"kubernetes.io/projected/fbb075c7-fcf6-49dd-a403-096fba79adef-kube-api-access-g8vzz\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893830 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-ceilometer-tls-certs\") 
pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893902 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-scripts\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.893975 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-config-data\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.894014 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-run-httpd\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.995996 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996042 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996063 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-log-httpd\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996147 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vzz\" (UniqueName: \"kubernetes.io/projected/fbb075c7-fcf6-49dd-a403-096fba79adef-kube-api-access-g8vzz\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996165 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996228 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-scripts\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996312 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-config-data\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996353 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-run-httpd\") pod \"ceilometer-0\" (UID: 
\"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996500 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-log-httpd\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:20 crc kubenswrapper[4750]: I0214 14:17:20.996662 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-run-httpd\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.001289 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.001537 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-config-data\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.001791 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.003144 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-scripts\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.015943 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.020251 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vzz\" (UniqueName: \"kubernetes.io/projected/fbb075c7-fcf6-49dd-a403-096fba79adef-kube-api-access-g8vzz\") pod \"ceilometer-0\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.077759 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.289472 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerStarted","Data":"669a74d24cd75ee9580da5894ca6065cc3abd41af346d0db3454521e85cbf15c"} Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.415546 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:21 crc kubenswrapper[4750]: W0214 14:17:21.540336 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbb075c7_fcf6_49dd_a403_096fba79adef.slice/crio-f08268936b2280fbe0d27aa529df3221829df6b8d3369a7090b27a809f408e5d WatchSource:0}: Error finding container f08268936b2280fbe0d27aa529df3221829df6b8d3369a7090b27a809f408e5d: Status 404 returned error can't find the container with id f08268936b2280fbe0d27aa529df3221829df6b8d3369a7090b27a809f408e5d Feb 14 14:17:21 crc kubenswrapper[4750]: I0214 14:17:21.543442 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:22 crc kubenswrapper[4750]: I0214 14:17:22.307271 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerStarted","Data":"f08268936b2280fbe0d27aa529df3221829df6b8d3369a7090b27a809f408e5d"} Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.320233 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerStarted","Data":"e8762905dedd4f2f4110ea6fa9a143703d3ad4e96801a5edfa9017c4d4937f95"} Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.320341 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" 
containerName="aodh-api" containerID="cri-o://cf3e810d8fa5a1a38cb5b3f63b4002c1e8c601c544c32044ae266b107ac5e44c" gracePeriod=30 Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.320434 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-listener" containerID="cri-o://e8762905dedd4f2f4110ea6fa9a143703d3ad4e96801a5edfa9017c4d4937f95" gracePeriod=30 Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.320452 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-notifier" containerID="cri-o://669a74d24cd75ee9580da5894ca6065cc3abd41af346d0db3454521e85cbf15c" gracePeriod=30 Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.320462 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-evaluator" containerID="cri-o://b406232ebd5290455415a98e22ff0b8fcc1a078f43da3959cb565ada3963f63a" gracePeriod=30 Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.323587 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerStarted","Data":"048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4"} Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.323633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerStarted","Data":"862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692"} Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.641407 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.641682 4750 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.655002 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.761344 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 14:17:23 crc kubenswrapper[4750]: I0214 14:17:23.804592 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.20280288 podStartE2EDuration="8.804573479s" podCreationTimestamp="2026-02-14 14:17:15 +0000 UTC" firstStartedPulling="2026-02-14 14:17:16.518745667 +0000 UTC m=+1508.544735138" lastFinishedPulling="2026-02-14 14:17:22.120516256 +0000 UTC m=+1514.146505737" observedRunningTime="2026-02-14 14:17:23.349345092 +0000 UTC m=+1515.375334573" watchObservedRunningTime="2026-02-14 14:17:23.804573479 +0000 UTC m=+1515.830562960" Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.336646 4750 generic.go:334] "Generic (PLEG): container finished" podID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerID="669a74d24cd75ee9580da5894ca6065cc3abd41af346d0db3454521e85cbf15c" exitCode=0 Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.336968 4750 generic.go:334] "Generic (PLEG): container finished" podID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerID="b406232ebd5290455415a98e22ff0b8fcc1a078f43da3959cb565ada3963f63a" exitCode=0 Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.336729 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerDied","Data":"669a74d24cd75ee9580da5894ca6065cc3abd41af346d0db3454521e85cbf15c"} Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.337013 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerDied","Data":"b406232ebd5290455415a98e22ff0b8fcc1a078f43da3959cb565ada3963f63a"} Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.337025 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerDied","Data":"cf3e810d8fa5a1a38cb5b3f63b4002c1e8c601c544c32044ae266b107ac5e44c"} Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.336977 4750 generic.go:334] "Generic (PLEG): container finished" podID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerID="cf3e810d8fa5a1a38cb5b3f63b4002c1e8c601c544c32044ae266b107ac5e44c" exitCode=0 Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.338856 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerStarted","Data":"99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1"} Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.372172 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.724317 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 14:17:24 crc kubenswrapper[4750]: I0214 14:17:24.724340 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 14:17:25 crc kubenswrapper[4750]: I0214 14:17:25.351935 4750 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerStarted","Data":"5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028"} Feb 14 14:17:25 crc kubenswrapper[4750]: I0214 14:17:25.352524 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-central-agent" containerID="cri-o://862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692" gracePeriod=30 Feb 14 14:17:25 crc kubenswrapper[4750]: I0214 14:17:25.352829 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="proxy-httpd" containerID="cri-o://5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028" gracePeriod=30 Feb 14 14:17:25 crc kubenswrapper[4750]: I0214 14:17:25.352926 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-notification-agent" containerID="cri-o://048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4" gracePeriod=30 Feb 14 14:17:25 crc kubenswrapper[4750]: I0214 14:17:25.352974 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="sg-core" containerID="cri-o://99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1" gracePeriod=30 Feb 14 14:17:25 crc kubenswrapper[4750]: I0214 14:17:25.394322 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.858428478 podStartE2EDuration="5.394293847s" podCreationTimestamp="2026-02-14 14:17:20 +0000 UTC" firstStartedPulling="2026-02-14 14:17:21.545013727 +0000 UTC m=+1513.571003198" lastFinishedPulling="2026-02-14 
14:17:25.080879086 +0000 UTC m=+1517.106868567" observedRunningTime="2026-02-14 14:17:25.38385462 +0000 UTC m=+1517.409844121" watchObservedRunningTime="2026-02-14 14:17:25.394293847 +0000 UTC m=+1517.420283348" Feb 14 14:17:26 crc kubenswrapper[4750]: I0214 14:17:26.369927 4750 generic.go:334] "Generic (PLEG): container finished" podID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerID="99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1" exitCode=2 Feb 14 14:17:26 crc kubenswrapper[4750]: I0214 14:17:26.370251 4750 generic.go:334] "Generic (PLEG): container finished" podID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerID="048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4" exitCode=0 Feb 14 14:17:26 crc kubenswrapper[4750]: I0214 14:17:26.369998 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerDied","Data":"99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1"} Feb 14 14:17:26 crc kubenswrapper[4750]: I0214 14:17:26.370303 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerDied","Data":"048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4"} Feb 14 14:17:29 crc kubenswrapper[4750]: I0214 14:17:29.419512 4750 generic.go:334] "Generic (PLEG): container finished" podID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerID="862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692" exitCode=0 Feb 14 14:17:29 crc kubenswrapper[4750]: I0214 14:17:29.419597 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerDied","Data":"862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692"} Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.129097 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.129415 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.451012 4750 generic.go:334] "Generic (PLEG): container finished" podID="ec6a6d64-77b8-486c-aa23-3927ca7a820a" containerID="6dca7009d852793796c0b1875030461e6fd6db3e0e9852f0c8b2701670d6cd2a" exitCode=137 Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.451098 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec6a6d64-77b8-486c-aa23-3927ca7a820a","Type":"ContainerDied","Data":"6dca7009d852793796c0b1875030461e6fd6db3e0e9852f0c8b2701670d6cd2a"} Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.451153 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec6a6d64-77b8-486c-aa23-3927ca7a820a","Type":"ContainerDied","Data":"be3507d396bc7106faa716e7ac56ae01762c27d77722284eadd4c1b957eee57c"} Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.451171 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3507d396bc7106faa716e7ac56ae01762c27d77722284eadd4c1b957eee57c" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.453574 4750 generic.go:334] "Generic (PLEG): container finished" podID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerID="8eb9c995aaba8f88bea8e4f07760a7411747e1b92a0f7e403ed56992a693f69d" exitCode=137 Feb 14 14:17:30 crc 
kubenswrapper[4750]: I0214 14:17:30.453744 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"acfe2e91-cbbe-4fd4-a194-697f5240263d","Type":"ContainerDied","Data":"8eb9c995aaba8f88bea8e4f07760a7411747e1b92a0f7e403ed56992a693f69d"} Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.453907 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"acfe2e91-cbbe-4fd4-a194-697f5240263d","Type":"ContainerDied","Data":"a7a8cccc5e7bdbf90d2a0ac0819583506f0407c0675d86c050d68993cd02e2b0"} Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.453924 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a8cccc5e7bdbf90d2a0ac0819583506f0407c0675d86c050d68993cd02e2b0" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.488231 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.493430 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.553735 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-config-data\") pod \"acfe2e91-cbbe-4fd4-a194-697f5240263d\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.553793 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56wk5\" (UniqueName: \"kubernetes.io/projected/ec6a6d64-77b8-486c-aa23-3927ca7a820a-kube-api-access-56wk5\") pod \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.553875 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-combined-ca-bundle\") pod \"acfe2e91-cbbe-4fd4-a194-697f5240263d\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.553943 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-config-data\") pod \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.553994 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbx26\" (UniqueName: \"kubernetes.io/projected/acfe2e91-cbbe-4fd4-a194-697f5240263d-kube-api-access-wbx26\") pod \"acfe2e91-cbbe-4fd4-a194-697f5240263d\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.554031 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/acfe2e91-cbbe-4fd4-a194-697f5240263d-logs\") pod \"acfe2e91-cbbe-4fd4-a194-697f5240263d\" (UID: \"acfe2e91-cbbe-4fd4-a194-697f5240263d\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.554153 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-combined-ca-bundle\") pod \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\" (UID: \"ec6a6d64-77b8-486c-aa23-3927ca7a820a\") " Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.555102 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acfe2e91-cbbe-4fd4-a194-697f5240263d-logs" (OuterVolumeSpecName: "logs") pod "acfe2e91-cbbe-4fd4-a194-697f5240263d" (UID: "acfe2e91-cbbe-4fd4-a194-697f5240263d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.569410 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6a6d64-77b8-486c-aa23-3927ca7a820a-kube-api-access-56wk5" (OuterVolumeSpecName: "kube-api-access-56wk5") pod "ec6a6d64-77b8-486c-aa23-3927ca7a820a" (UID: "ec6a6d64-77b8-486c-aa23-3927ca7a820a"). InnerVolumeSpecName "kube-api-access-56wk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.569468 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfe2e91-cbbe-4fd4-a194-697f5240263d-kube-api-access-wbx26" (OuterVolumeSpecName: "kube-api-access-wbx26") pod "acfe2e91-cbbe-4fd4-a194-697f5240263d" (UID: "acfe2e91-cbbe-4fd4-a194-697f5240263d"). InnerVolumeSpecName "kube-api-access-wbx26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.608705 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acfe2e91-cbbe-4fd4-a194-697f5240263d" (UID: "acfe2e91-cbbe-4fd4-a194-697f5240263d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.609569 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-config-data" (OuterVolumeSpecName: "config-data") pod "ec6a6d64-77b8-486c-aa23-3927ca7a820a" (UID: "ec6a6d64-77b8-486c-aa23-3927ca7a820a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.609588 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec6a6d64-77b8-486c-aa23-3927ca7a820a" (UID: "ec6a6d64-77b8-486c-aa23-3927ca7a820a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.618662 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-config-data" (OuterVolumeSpecName: "config-data") pod "acfe2e91-cbbe-4fd4-a194-697f5240263d" (UID: "acfe2e91-cbbe-4fd4-a194-697f5240263d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657361 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657398 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56wk5\" (UniqueName: \"kubernetes.io/projected/ec6a6d64-77b8-486c-aa23-3927ca7a820a-kube-api-access-56wk5\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657412 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfe2e91-cbbe-4fd4-a194-697f5240263d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657423 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657435 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbx26\" (UniqueName: \"kubernetes.io/projected/acfe2e91-cbbe-4fd4-a194-697f5240263d-kube-api-access-wbx26\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657445 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acfe2e91-cbbe-4fd4-a194-697f5240263d-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:30 crc kubenswrapper[4750]: I0214 14:17:30.657456 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a6d64-77b8-486c-aa23-3927ca7a820a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.466198 4750 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.466237 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.512469 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.526633 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.539393 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.554285 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.567245 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: E0214 14:17:31.568306 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-log" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.568453 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-log" Feb 14 14:17:31 crc kubenswrapper[4750]: E0214 14:17:31.568561 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-metadata" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.568663 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-metadata" Feb 14 14:17:31 crc kubenswrapper[4750]: E0214 14:17:31.568792 4750 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ec6a6d64-77b8-486c-aa23-3927ca7a820a" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.569781 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6a6d64-77b8-486c-aa23-3927ca7a820a" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.570227 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-metadata" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.570351 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6a6d64-77b8-486c-aa23-3927ca7a820a" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.570438 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" containerName="nova-metadata-log" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.571822 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.576668 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.577104 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.577285 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.581715 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.593618 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.599568 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.602426 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.602726 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.610062 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.682744 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.682792 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-config-data\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.682883 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.682954 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.682984 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.683018 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n2h\" (UniqueName: \"kubernetes.io/projected/7ccaea34-1af0-4e5e-9771-bf53272bab57-kube-api-access-k5n2h\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.683144 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.683166 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7a863-8276-4c13-bbd2-8adad31ad437-logs\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.683235 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ks8\" (UniqueName: 
\"kubernetes.io/projected/44e7a863-8276-4c13-bbd2-8adad31ad437-kube-api-access-g5ks8\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.683287 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.785284 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.785676 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.785850 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n2h\" (UniqueName: \"kubernetes.io/projected/7ccaea34-1af0-4e5e-9771-bf53272bab57-kube-api-access-k5n2h\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.786174 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-nova-metadata-tls-certs\") 
pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.786322 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7a863-8276-4c13-bbd2-8adad31ad437-logs\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.786558 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5ks8\" (UniqueName: \"kubernetes.io/projected/44e7a863-8276-4c13-bbd2-8adad31ad437-kube-api-access-g5ks8\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.786764 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.786942 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.787065 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-config-data\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.787295 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.791989 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.792490 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7a863-8276-4c13-bbd2-8adad31ad437-logs\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.798201 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.800664 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.806759 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.829865 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccaea34-1af0-4e5e-9771-bf53272bab57-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.829874 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-config-data\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.830500 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.843717 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5ks8\" (UniqueName: \"kubernetes.io/projected/44e7a863-8276-4c13-bbd2-8adad31ad437-kube-api-access-g5ks8\") pod \"nova-metadata-0\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " pod="openstack/nova-metadata-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.843771 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n2h\" (UniqueName: \"kubernetes.io/projected/7ccaea34-1af0-4e5e-9771-bf53272bab57-kube-api-access-k5n2h\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ccaea34-1af0-4e5e-9771-bf53272bab57\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.905663 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:31 crc kubenswrapper[4750]: I0214 14:17:31.935704 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:32 crc kubenswrapper[4750]: I0214 14:17:32.473632 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:32 crc kubenswrapper[4750]: I0214 14:17:32.482645 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 14:17:32 crc kubenswrapper[4750]: I0214 14:17:32.759830 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfe2e91-cbbe-4fd4-a194-697f5240263d" path="/var/lib/kubelet/pods/acfe2e91-cbbe-4fd4-a194-697f5240263d/volumes" Feb 14 14:17:32 crc kubenswrapper[4750]: I0214 14:17:32.760546 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6a6d64-77b8-486c-aa23-3927ca7a820a" path="/var/lib/kubelet/pods/ec6a6d64-77b8-486c-aa23-3927ca7a820a/volumes" Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.493462 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44e7a863-8276-4c13-bbd2-8adad31ad437","Type":"ContainerStarted","Data":"56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c"} Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.493787 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44e7a863-8276-4c13-bbd2-8adad31ad437","Type":"ContainerStarted","Data":"3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d"} Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.493806 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"44e7a863-8276-4c13-bbd2-8adad31ad437","Type":"ContainerStarted","Data":"6ec3a70e80d1f059c9c14ff4e61386a8850ce8f675e76559536193002c38fd41"} Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.497105 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ccaea34-1af0-4e5e-9771-bf53272bab57","Type":"ContainerStarted","Data":"a5c46d930b19ac05c2961209bfa5730aba3a35f7dc649baae62b07c434a63aa8"} Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.497148 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ccaea34-1af0-4e5e-9771-bf53272bab57","Type":"ContainerStarted","Data":"8a55dabfef02e6ae60d34d95035947b022b1c163365ae2c025b4a7c7c0dfc25d"} Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.524291 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.524263474 podStartE2EDuration="2.524263474s" podCreationTimestamp="2026-02-14 14:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:33.512964313 +0000 UTC m=+1525.538953804" watchObservedRunningTime="2026-02-14 14:17:33.524263474 +0000 UTC m=+1525.550252975" Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.541345 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.54131797 podStartE2EDuration="2.54131797s" podCreationTimestamp="2026-02-14 14:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:33.538772407 +0000 UTC m=+1525.564761928" watchObservedRunningTime="2026-02-14 14:17:33.54131797 +0000 UTC m=+1525.567307471" Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.643651 4750 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.643904 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.644153 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 14:17:33 crc kubenswrapper[4750]: I0214 14:17:33.646341 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.508526 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.513311 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.716277 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qngbv"] Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.718495 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.738445 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qngbv"] Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.761505 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lbb\" (UniqueName: \"kubernetes.io/projected/2e75deae-648c-49ef-a28b-3025529a7b0c-kube-api-access-z2lbb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.761831 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.762083 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.762287 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.762404 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.763638 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-config\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.865946 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-config\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.867056 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-config\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.867288 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lbb\" (UniqueName: \"kubernetes.io/projected/2e75deae-648c-49ef-a28b-3025529a7b0c-kube-api-access-z2lbb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.867323 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.867998 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.868172 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.868706 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.869070 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.869896 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.869951 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.870510 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:34 crc kubenswrapper[4750]: I0214 14:17:34.887968 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lbb\" (UniqueName: \"kubernetes.io/projected/2e75deae-648c-49ef-a28b-3025529a7b0c-kube-api-access-z2lbb\") pod \"dnsmasq-dns-6b7bbf7cf9-qngbv\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:35 crc kubenswrapper[4750]: I0214 14:17:35.057468 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:35 crc kubenswrapper[4750]: I0214 14:17:35.623231 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qngbv"] Feb 14 14:17:35 crc kubenswrapper[4750]: W0214 14:17:35.629026 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e75deae_648c_49ef_a28b_3025529a7b0c.slice/crio-3fb3ba20796e536ae98115e975e33798f5a0782aed1892f0a7412b965be7668f WatchSource:0}: Error finding container 3fb3ba20796e536ae98115e975e33798f5a0782aed1892f0a7412b965be7668f: Status 404 returned error can't find the container with id 3fb3ba20796e536ae98115e975e33798f5a0782aed1892f0a7412b965be7668f Feb 14 14:17:36 crc kubenswrapper[4750]: I0214 14:17:36.528370 4750 generic.go:334] "Generic (PLEG): container finished" podID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerID="720066c275090ad483b046df62bafd502b15f2175ceeacfbef11b0d63a7c2419" exitCode=0 Feb 14 14:17:36 crc kubenswrapper[4750]: I0214 14:17:36.529279 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" event={"ID":"2e75deae-648c-49ef-a28b-3025529a7b0c","Type":"ContainerDied","Data":"720066c275090ad483b046df62bafd502b15f2175ceeacfbef11b0d63a7c2419"} Feb 14 14:17:36 crc kubenswrapper[4750]: I0214 14:17:36.529349 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" event={"ID":"2e75deae-648c-49ef-a28b-3025529a7b0c","Type":"ContainerStarted","Data":"3fb3ba20796e536ae98115e975e33798f5a0782aed1892f0a7412b965be7668f"} Feb 14 14:17:36 crc kubenswrapper[4750]: I0214 14:17:36.907137 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:36 crc kubenswrapper[4750]: I0214 14:17:36.936360 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 
14 14:17:36 crc kubenswrapper[4750]: I0214 14:17:36.936416 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 14:17:37 crc kubenswrapper[4750]: I0214 14:17:37.548666 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" event={"ID":"2e75deae-648c-49ef-a28b-3025529a7b0c","Type":"ContainerStarted","Data":"280dc5d6c9a7128284ab012ac69d59a5f12f047c503d5a32ea999c2c535e060e"} Feb 14 14:17:37 crc kubenswrapper[4750]: I0214 14:17:37.549283 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:37 crc kubenswrapper[4750]: I0214 14:17:37.578072 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" podStartSLOduration=3.578050514 podStartE2EDuration="3.578050514s" podCreationTimestamp="2026-02-14 14:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:37.574666388 +0000 UTC m=+1529.600655869" watchObservedRunningTime="2026-02-14 14:17:37.578050514 +0000 UTC m=+1529.604040005" Feb 14 14:17:37 crc kubenswrapper[4750]: I0214 14:17:37.965833 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:37 crc kubenswrapper[4750]: I0214 14:17:37.966057 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-log" containerID="cri-o://faf2066662005fc0b096e46726c0c9001ea538c0bcb30924aed86e8ddcba769d" gracePeriod=30 Feb 14 14:17:37 crc kubenswrapper[4750]: I0214 14:17:37.966175 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-api" 
containerID="cri-o://9c09008851f386a37fbf7ca0119cdf4ad61024dfee3600e57a1e0619609979fc" gracePeriod=30 Feb 14 14:17:38 crc kubenswrapper[4750]: I0214 14:17:38.563984 4750 generic.go:334] "Generic (PLEG): container finished" podID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerID="faf2066662005fc0b096e46726c0c9001ea538c0bcb30924aed86e8ddcba769d" exitCode=143 Feb 14 14:17:38 crc kubenswrapper[4750]: I0214 14:17:38.564071 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e65daf92-2b4a-4b12-9cef-8b8267dc13cf","Type":"ContainerDied","Data":"faf2066662005fc0b096e46726c0c9001ea538c0bcb30924aed86e8ddcba769d"} Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.602043 4750 generic.go:334] "Generic (PLEG): container finished" podID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerID="9c09008851f386a37fbf7ca0119cdf4ad61024dfee3600e57a1e0619609979fc" exitCode=0 Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.602124 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e65daf92-2b4a-4b12-9cef-8b8267dc13cf","Type":"ContainerDied","Data":"9c09008851f386a37fbf7ca0119cdf4ad61024dfee3600e57a1e0619609979fc"} Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.602569 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e65daf92-2b4a-4b12-9cef-8b8267dc13cf","Type":"ContainerDied","Data":"cbd1d611e1bc6f4c561c1a7f1fa897b6698373110891191d69262b3de21f87c5"} Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.602581 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbd1d611e1bc6f4c561c1a7f1fa897b6698373110891191d69262b3de21f87c5" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.656859 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.751405 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-config-data\") pod \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.751509 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-778k6\" (UniqueName: \"kubernetes.io/projected/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-kube-api-access-778k6\") pod \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.751705 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-logs\") pod \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.751800 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-combined-ca-bundle\") pod \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\" (UID: \"e65daf92-2b4a-4b12-9cef-8b8267dc13cf\") " Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.753768 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-logs" (OuterVolumeSpecName: "logs") pod "e65daf92-2b4a-4b12-9cef-8b8267dc13cf" (UID: "e65daf92-2b4a-4b12-9cef-8b8267dc13cf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.789328 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-config-data" (OuterVolumeSpecName: "config-data") pod "e65daf92-2b4a-4b12-9cef-8b8267dc13cf" (UID: "e65daf92-2b4a-4b12-9cef-8b8267dc13cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.793301 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-kube-api-access-778k6" (OuterVolumeSpecName: "kube-api-access-778k6") pod "e65daf92-2b4a-4b12-9cef-8b8267dc13cf" (UID: "e65daf92-2b4a-4b12-9cef-8b8267dc13cf"). InnerVolumeSpecName "kube-api-access-778k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.802201 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e65daf92-2b4a-4b12-9cef-8b8267dc13cf" (UID: "e65daf92-2b4a-4b12-9cef-8b8267dc13cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.854549 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.854577 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.854588 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.854599 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-778k6\" (UniqueName: \"kubernetes.io/projected/e65daf92-2b4a-4b12-9cef-8b8267dc13cf-kube-api-access-778k6\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.907795 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.928866 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.938079 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 14:17:41 crc kubenswrapper[4750]: I0214 14:17:41.938146 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.616187 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.634547 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.650914 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.667669 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.701705 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:42 crc kubenswrapper[4750]: E0214 14:17:42.702262 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-log" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.702276 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-log" Feb 14 14:17:42 crc kubenswrapper[4750]: E0214 14:17:42.702325 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-api" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.702332 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-api" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.702528 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-api" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.702580 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" containerName="nova-api-log" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.705966 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.708564 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.708777 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.708902 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.719151 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.782202 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.782274 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72790c93-26cd-4d1c-b964-8b86dc8afc8b-logs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.782324 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-public-tls-certs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.782352 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbg9l\" (UniqueName: 
\"kubernetes.io/projected/72790c93-26cd-4d1c-b964-8b86dc8afc8b-kube-api-access-tbg9l\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.782481 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.782554 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-config-data\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.791315 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65daf92-2b4a-4b12-9cef-8b8267dc13cf" path="/var/lib/kubelet/pods/e65daf92-2b4a-4b12-9cef-8b8267dc13cf/volumes" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.884193 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.884308 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-config-data\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.884395 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.884428 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72790c93-26cd-4d1c-b964-8b86dc8afc8b-logs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.884473 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-public-tls-certs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.884492 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbg9l\" (UniqueName: \"kubernetes.io/projected/72790c93-26cd-4d1c-b964-8b86dc8afc8b-kube-api-access-tbg9l\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.887624 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4c8dd"] Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.889646 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.894294 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72790c93-26cd-4d1c-b964-8b86dc8afc8b-logs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.895568 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.897083 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.897204 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.898741 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-public-tls-certs\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.902292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.904991 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4c8dd"] Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 
14:17:42.906218 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-config-data\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.927353 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbg9l\" (UniqueName: \"kubernetes.io/projected/72790c93-26cd-4d1c-b964-8b86dc8afc8b-kube-api-access-tbg9l\") pod \"nova-api-0\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " pod="openstack/nova-api-0" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.957331 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.957379 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.986563 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-config-data\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.986732 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-scripts\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.986762 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwnw\" (UniqueName: \"kubernetes.io/projected/08b0dd23-b026-4e72-b924-e6a73fea0c09-kube-api-access-xnwnw\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:42 crc kubenswrapper[4750]: I0214 14:17:42.986834 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.040061 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.088673 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-scripts\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.088750 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwnw\" (UniqueName: \"kubernetes.io/projected/08b0dd23-b026-4e72-b924-e6a73fea0c09-kube-api-access-xnwnw\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.088858 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.088907 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-config-data\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.093054 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-scripts\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc 
kubenswrapper[4750]: I0214 14:17:43.093413 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-config-data\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.093524 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.106432 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwnw\" (UniqueName: \"kubernetes.io/projected/08b0dd23-b026-4e72-b924-e6a73fea0c09-kube-api-access-xnwnw\") pod \"nova-cell1-cell-mapping-4c8dd\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.344687 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.709331 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:43 crc kubenswrapper[4750]: W0214 14:17:43.939686 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b0dd23_b026_4e72_b924_e6a73fea0c09.slice/crio-ab31e3de66e4648d7e7f1b7e0420a793612c797eccb958d861de932d0234512f WatchSource:0}: Error finding container ab31e3de66e4648d7e7f1b7e0420a793612c797eccb958d861de932d0234512f: Status 404 returned error can't find the container with id ab31e3de66e4648d7e7f1b7e0420a793612c797eccb958d861de932d0234512f Feb 14 14:17:43 crc kubenswrapper[4750]: I0214 14:17:43.944289 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4c8dd"] Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.648546 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72790c93-26cd-4d1c-b964-8b86dc8afc8b","Type":"ContainerStarted","Data":"fe1e35d0b7cab885f948884d76bc5e2f660adbcba68c0a4ac6f6677f2be1df16"} Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.649056 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72790c93-26cd-4d1c-b964-8b86dc8afc8b","Type":"ContainerStarted","Data":"99e75f767f9a85b267711f79f05532a03911ace516e341c7374849148aa09fbb"} Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.649068 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72790c93-26cd-4d1c-b964-8b86dc8afc8b","Type":"ContainerStarted","Data":"ec43572dcbb0d30cc4431c7fd81233f806825d2a81cf2fb5f41b270420decaa7"} Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.650914 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4c8dd" 
event={"ID":"08b0dd23-b026-4e72-b924-e6a73fea0c09","Type":"ContainerStarted","Data":"840b14f06790bd205d04be74420fa875e0829b2b0edb5e725bf4b3f4cc8bb918"} Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.650961 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4c8dd" event={"ID":"08b0dd23-b026-4e72-b924-e6a73fea0c09","Type":"ContainerStarted","Data":"ab31e3de66e4648d7e7f1b7e0420a793612c797eccb958d861de932d0234512f"} Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.717618 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.717595992 podStartE2EDuration="2.717595992s" podCreationTimestamp="2026-02-14 14:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:44.683632756 +0000 UTC m=+1536.709622237" watchObservedRunningTime="2026-02-14 14:17:44.717595992 +0000 UTC m=+1536.743585473" Feb 14 14:17:44 crc kubenswrapper[4750]: I0214 14:17:44.718442 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4c8dd" podStartSLOduration=2.718434616 podStartE2EDuration="2.718434616s" podCreationTimestamp="2026-02-14 14:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:44.710938543 +0000 UTC m=+1536.736928024" watchObservedRunningTime="2026-02-14 14:17:44.718434616 +0000 UTC m=+1536.744424097" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.060268 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.126735 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-74l7d"] Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.127018 
4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerName="dnsmasq-dns" containerID="cri-o://37518088551d5e11c890ab9e91837281bbb8cbeac7b34184961bdb2fb114c93f" gracePeriod=10 Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.666217 4750 generic.go:334] "Generic (PLEG): container finished" podID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerID="37518088551d5e11c890ab9e91837281bbb8cbeac7b34184961bdb2fb114c93f" exitCode=0 Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.666986 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" event={"ID":"345928e8-5155-49e5-9159-7f9142ee2dd0","Type":"ContainerDied","Data":"37518088551d5e11c890ab9e91837281bbb8cbeac7b34184961bdb2fb114c93f"} Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.794899 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.876009 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-nb\") pod \"345928e8-5155-49e5-9159-7f9142ee2dd0\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.876177 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-svc\") pod \"345928e8-5155-49e5-9159-7f9142ee2dd0\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.876213 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t566s\" (UniqueName: 
\"kubernetes.io/projected/345928e8-5155-49e5-9159-7f9142ee2dd0-kube-api-access-t566s\") pod \"345928e8-5155-49e5-9159-7f9142ee2dd0\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.876263 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-config\") pod \"345928e8-5155-49e5-9159-7f9142ee2dd0\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.876352 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-swift-storage-0\") pod \"345928e8-5155-49e5-9159-7f9142ee2dd0\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.876476 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-sb\") pod \"345928e8-5155-49e5-9159-7f9142ee2dd0\" (UID: \"345928e8-5155-49e5-9159-7f9142ee2dd0\") " Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.908986 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345928e8-5155-49e5-9159-7f9142ee2dd0-kube-api-access-t566s" (OuterVolumeSpecName: "kube-api-access-t566s") pod "345928e8-5155-49e5-9159-7f9142ee2dd0" (UID: "345928e8-5155-49e5-9159-7f9142ee2dd0"). InnerVolumeSpecName "kube-api-access-t566s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.957746 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "345928e8-5155-49e5-9159-7f9142ee2dd0" (UID: "345928e8-5155-49e5-9159-7f9142ee2dd0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.959628 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-config" (OuterVolumeSpecName: "config") pod "345928e8-5155-49e5-9159-7f9142ee2dd0" (UID: "345928e8-5155-49e5-9159-7f9142ee2dd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.963065 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "345928e8-5155-49e5-9159-7f9142ee2dd0" (UID: "345928e8-5155-49e5-9159-7f9142ee2dd0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.968426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "345928e8-5155-49e5-9159-7f9142ee2dd0" (UID: "345928e8-5155-49e5-9159-7f9142ee2dd0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.974446 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "345928e8-5155-49e5-9159-7f9142ee2dd0" (UID: "345928e8-5155-49e5-9159-7f9142ee2dd0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.980897 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.980938 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t566s\" (UniqueName: \"kubernetes.io/projected/345928e8-5155-49e5-9159-7f9142ee2dd0-kube-api-access-t566s\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.980953 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.980969 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.980981 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:45 crc kubenswrapper[4750]: I0214 14:17:45.980992 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/345928e8-5155-49e5-9159-7f9142ee2dd0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:46 crc kubenswrapper[4750]: I0214 14:17:46.684045 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" event={"ID":"345928e8-5155-49e5-9159-7f9142ee2dd0","Type":"ContainerDied","Data":"764b0d950da4c4c22a73328bfeac98825fcd874aa1b9950757f3394272f4848d"} Feb 14 14:17:46 crc kubenswrapper[4750]: I0214 14:17:46.684359 4750 scope.go:117] "RemoveContainer" containerID="37518088551d5e11c890ab9e91837281bbb8cbeac7b34184961bdb2fb114c93f" Feb 14 14:17:46 crc kubenswrapper[4750]: I0214 14:17:46.684554 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-74l7d" Feb 14 14:17:46 crc kubenswrapper[4750]: I0214 14:17:46.761228 4750 scope.go:117] "RemoveContainer" containerID="22f7f9617c0101e3b17f5426e1b09bc3d7b7fee6499b0873bd94730643a13eda" Feb 14 14:17:46 crc kubenswrapper[4750]: I0214 14:17:46.780334 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-74l7d"] Feb 14 14:17:46 crc kubenswrapper[4750]: I0214 14:17:46.780595 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-74l7d"] Feb 14 14:17:48 crc kubenswrapper[4750]: I0214 14:17:48.757331 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" path="/var/lib/kubelet/pods/345928e8-5155-49e5-9159-7f9142ee2dd0/volumes" Feb 14 14:17:49 crc kubenswrapper[4750]: I0214 14:17:49.736255 4750 generic.go:334] "Generic (PLEG): container finished" podID="08b0dd23-b026-4e72-b924-e6a73fea0c09" containerID="840b14f06790bd205d04be74420fa875e0829b2b0edb5e725bf4b3f4cc8bb918" exitCode=0 Feb 14 14:17:49 crc kubenswrapper[4750]: I0214 14:17:49.736343 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4c8dd" 
event={"ID":"08b0dd23-b026-4e72-b924-e6a73fea0c09","Type":"ContainerDied","Data":"840b14f06790bd205d04be74420fa875e0829b2b0edb5e725bf4b3f4cc8bb918"} Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.078790 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.098677 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.346254 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.516975 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwnw\" (UniqueName: \"kubernetes.io/projected/08b0dd23-b026-4e72-b924-e6a73fea0c09-kube-api-access-xnwnw\") pod \"08b0dd23-b026-4e72-b924-e6a73fea0c09\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.517203 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-scripts\") pod \"08b0dd23-b026-4e72-b924-e6a73fea0c09\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.517334 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-combined-ca-bundle\") pod \"08b0dd23-b026-4e72-b924-e6a73fea0c09\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.517486 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-config-data\") pod \"08b0dd23-b026-4e72-b924-e6a73fea0c09\" (UID: \"08b0dd23-b026-4e72-b924-e6a73fea0c09\") " Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.522855 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b0dd23-b026-4e72-b924-e6a73fea0c09-kube-api-access-xnwnw" (OuterVolumeSpecName: "kube-api-access-xnwnw") pod "08b0dd23-b026-4e72-b924-e6a73fea0c09" (UID: "08b0dd23-b026-4e72-b924-e6a73fea0c09"). InnerVolumeSpecName "kube-api-access-xnwnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.524257 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-scripts" (OuterVolumeSpecName: "scripts") pod "08b0dd23-b026-4e72-b924-e6a73fea0c09" (UID: "08b0dd23-b026-4e72-b924-e6a73fea0c09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.550836 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-config-data" (OuterVolumeSpecName: "config-data") pod "08b0dd23-b026-4e72-b924-e6a73fea0c09" (UID: "08b0dd23-b026-4e72-b924-e6a73fea0c09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.555158 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08b0dd23-b026-4e72-b924-e6a73fea0c09" (UID: "08b0dd23-b026-4e72-b924-e6a73fea0c09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.621624 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwnw\" (UniqueName: \"kubernetes.io/projected/08b0dd23-b026-4e72-b924-e6a73fea0c09-kube-api-access-xnwnw\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.621661 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.621670 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.621679 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b0dd23-b026-4e72-b924-e6a73fea0c09-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.770266 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4c8dd" event={"ID":"08b0dd23-b026-4e72-b924-e6a73fea0c09","Type":"ContainerDied","Data":"ab31e3de66e4648d7e7f1b7e0420a793612c797eccb958d861de932d0234512f"} Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.770339 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab31e3de66e4648d7e7f1b7e0420a793612c797eccb958d861de932d0234512f" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.770446 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4c8dd" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.944531 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.948887 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 14:17:51 crc kubenswrapper[4750]: I0214 14:17:51.956178 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.039827 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.040070 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-log" containerID="cri-o://99e75f767f9a85b267711f79f05532a03911ace516e341c7374849148aa09fbb" gracePeriod=30 Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.040573 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-api" containerID="cri-o://fe1e35d0b7cab885f948884d76bc5e2f660adbcba68c0a4ac6f6677f2be1df16" gracePeriod=30 Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.070323 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.070557 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" containerName="nova-scheduler-scheduler" containerID="cri-o://0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0" gracePeriod=30 Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.085037 4750 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.806924 4750 generic.go:334] "Generic (PLEG): container finished" podID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerID="fe1e35d0b7cab885f948884d76bc5e2f660adbcba68c0a4ac6f6677f2be1df16" exitCode=0 Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.807293 4750 generic.go:334] "Generic (PLEG): container finished" podID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerID="99e75f767f9a85b267711f79f05532a03911ace516e341c7374849148aa09fbb" exitCode=143 Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.807261 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72790c93-26cd-4d1c-b964-8b86dc8afc8b","Type":"ContainerDied","Data":"fe1e35d0b7cab885f948884d76bc5e2f660adbcba68c0a4ac6f6677f2be1df16"} Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.807463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72790c93-26cd-4d1c-b964-8b86dc8afc8b","Type":"ContainerDied","Data":"99e75f767f9a85b267711f79f05532a03911ace516e341c7374849148aa09fbb"} Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.807505 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72790c93-26cd-4d1c-b964-8b86dc8afc8b","Type":"ContainerDied","Data":"ec43572dcbb0d30cc4431c7fd81233f806825d2a81cf2fb5f41b270420decaa7"} Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.807520 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec43572dcbb0d30cc4431c7fd81233f806825d2a81cf2fb5f41b270420decaa7" Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.818062 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 14:17:52 crc kubenswrapper[4750]: I0214 14:17:52.899701 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.065834 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-public-tls-certs\") pod \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.066334 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbg9l\" (UniqueName: \"kubernetes.io/projected/72790c93-26cd-4d1c-b964-8b86dc8afc8b-kube-api-access-tbg9l\") pod \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.066498 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-internal-tls-certs\") pod \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.066589 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-combined-ca-bundle\") pod \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.066625 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72790c93-26cd-4d1c-b964-8b86dc8afc8b-logs\") pod \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.066662 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-config-data\") pod \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\" (UID: \"72790c93-26cd-4d1c-b964-8b86dc8afc8b\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.067441 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72790c93-26cd-4d1c-b964-8b86dc8afc8b-logs" (OuterVolumeSpecName: "logs") pod "72790c93-26cd-4d1c-b964-8b86dc8afc8b" (UID: "72790c93-26cd-4d1c-b964-8b86dc8afc8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.068020 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72790c93-26cd-4d1c-b964-8b86dc8afc8b-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.076528 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72790c93-26cd-4d1c-b964-8b86dc8afc8b-kube-api-access-tbg9l" (OuterVolumeSpecName: "kube-api-access-tbg9l") pod "72790c93-26cd-4d1c-b964-8b86dc8afc8b" (UID: "72790c93-26cd-4d1c-b964-8b86dc8afc8b"). InnerVolumeSpecName "kube-api-access-tbg9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.114026 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72790c93-26cd-4d1c-b964-8b86dc8afc8b" (UID: "72790c93-26cd-4d1c-b964-8b86dc8afc8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.120322 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-config-data" (OuterVolumeSpecName: "config-data") pod "72790c93-26cd-4d1c-b964-8b86dc8afc8b" (UID: "72790c93-26cd-4d1c-b964-8b86dc8afc8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.146914 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72790c93-26cd-4d1c-b964-8b86dc8afc8b" (UID: "72790c93-26cd-4d1c-b964-8b86dc8afc8b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.163651 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "72790c93-26cd-4d1c-b964-8b86dc8afc8b" (UID: "72790c93-26cd-4d1c-b964-8b86dc8afc8b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.170900 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.171081 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbg9l\" (UniqueName: \"kubernetes.io/projected/72790c93-26cd-4d1c-b964-8b86dc8afc8b-kube-api-access-tbg9l\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.171220 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.171337 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.171437 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72790c93-26cd-4d1c-b964-8b86dc8afc8b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.654416 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0 is running failed: container process not found" containerID="0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.655271 4750 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0 is running failed: container process not found" containerID="0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.655794 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0 is running failed: container process not found" containerID="0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.655913 4750 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" containerName="nova-scheduler-scheduler" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.823864 4750 generic.go:334] "Generic (PLEG): container finished" podID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" containerID="0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0" exitCode=0 Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.823915 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50","Type":"ContainerDied","Data":"0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0"} Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.823955 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50","Type":"ContainerDied","Data":"cc17e6fa708cdb5e3936f6f24760043f3274a52f4ffcf455c9d903097472b71f"} Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.823997 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc17e6fa708cdb5e3936f6f24760043f3274a52f4ffcf455c9d903097472b71f" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.828455 4750 generic.go:334] "Generic (PLEG): container finished" podID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerID="e8762905dedd4f2f4110ea6fa9a143703d3ad4e96801a5edfa9017c4d4937f95" exitCode=137 Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.828550 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.828569 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerDied","Data":"e8762905dedd4f2f4110ea6fa9a143703d3ad4e96801a5edfa9017c4d4937f95"} Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.829487 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-log" containerID="cri-o://3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d" gracePeriod=30 Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.829624 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-metadata" containerID="cri-o://56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c" gracePeriod=30 Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.830555 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.892068 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.912144 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.932342 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.933042 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerName="init" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933066 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerName="init" Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.933085 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerName="dnsmasq-dns" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933093 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerName="dnsmasq-dns" Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.933126 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-api" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933135 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-api" Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.933158 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" containerName="nova-scheduler-scheduler" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933166 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" containerName="nova-scheduler-scheduler" Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.933182 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-log" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933190 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-log" Feb 14 14:17:53 crc kubenswrapper[4750]: E0214 14:17:53.933212 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b0dd23-b026-4e72-b924-e6a73fea0c09" containerName="nova-manage" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933219 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b0dd23-b026-4e72-b924-e6a73fea0c09" containerName="nova-manage" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933531 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" containerName="nova-scheduler-scheduler" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933573 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b0dd23-b026-4e72-b924-e6a73fea0c09" containerName="nova-manage" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933592 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="345928e8-5155-49e5-9159-7f9142ee2dd0" containerName="dnsmasq-dns" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933607 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-log" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.933621 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" containerName="nova-api-api" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.935438 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.942642 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.942657 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.942873 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.943474 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.948837 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.989945 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-config-data\") pod \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.991037 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4cfr\" (UniqueName: \"kubernetes.io/projected/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-kube-api-access-z4cfr\") pod \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.991733 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-combined-ca-bundle\") pod \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\" (UID: \"9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50\") " Feb 14 14:17:53 crc kubenswrapper[4750]: I0214 14:17:53.996828 
4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-kube-api-access-z4cfr" (OuterVolumeSpecName: "kube-api-access-z4cfr") pod "9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" (UID: "9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50"). InnerVolumeSpecName "kube-api-access-z4cfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.020909 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" (UID: "9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.022659 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-config-data" (OuterVolumeSpecName: "config-data") pod "9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" (UID: "9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.095980 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pg8n\" (UniqueName: \"kubernetes.io/projected/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-kube-api-access-5pg8n\") pod \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.096099 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-scripts\") pod \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.096181 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-combined-ca-bundle\") pod \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.096653 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-config-data\") pod \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\" (UID: \"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1\") " Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.097616 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098022 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6845c\" (UniqueName: \"kubernetes.io/projected/8ed4f6ec-3953-48de-a051-2af04cdafeb4-kube-api-access-6845c\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098080 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098288 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-config-data\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098316 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098471 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ed4f6ec-3953-48de-a051-2af04cdafeb4-logs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098633 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4cfr\" (UniqueName: \"kubernetes.io/projected/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-kube-api-access-z4cfr\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc 
kubenswrapper[4750]: I0214 14:17:54.098651 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.098661 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.099573 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-scripts" (OuterVolumeSpecName: "scripts") pod "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" (UID: "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.099997 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-kube-api-access-5pg8n" (OuterVolumeSpecName: "kube-api-access-5pg8n") pod "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" (UID: "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1"). InnerVolumeSpecName "kube-api-access-5pg8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.200923 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6845c\" (UniqueName: \"kubernetes.io/projected/8ed4f6ec-3953-48de-a051-2af04cdafeb4-kube-api-access-6845c\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.200980 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201060 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-config-data\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201080 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201183 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ed4f6ec-3953-48de-a051-2af04cdafeb4-logs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201232 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201445 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201462 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pg8n\" (UniqueName: \"kubernetes.io/projected/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-kube-api-access-5pg8n\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.201817 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ed4f6ec-3953-48de-a051-2af04cdafeb4-logs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.204867 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.204872 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.205388 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-config-data\") pod \"nova-api-0\" 
(UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.205560 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed4f6ec-3953-48de-a051-2af04cdafeb4-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.213843 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-config-data" (OuterVolumeSpecName: "config-data") pod "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" (UID: "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.216917 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6845c\" (UniqueName: \"kubernetes.io/projected/8ed4f6ec-3953-48de-a051-2af04cdafeb4-kube-api-access-6845c\") pod \"nova-api-0\" (UID: \"8ed4f6ec-3953-48de-a051-2af04cdafeb4\") " pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.228162 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" (UID: "29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.269229 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.303475 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.303509 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.735781 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.755125 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72790c93-26cd-4d1c-b964-8b86dc8afc8b" path="/var/lib/kubelet/pods/72790c93-26cd-4d1c-b964-8b86dc8afc8b/volumes" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.851990 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1","Type":"ContainerDied","Data":"90d46724bc5606130b11a80809415823ef8acc22c9b5c8032e0042d9420ebd49"} Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.852058 4750 scope.go:117] "RemoveContainer" containerID="e8762905dedd4f2f4110ea6fa9a143703d3ad4e96801a5edfa9017c4d4937f95" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.852284 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.858537 4750 generic.go:334] "Generic (PLEG): container finished" podID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerID="3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d" exitCode=143 Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.858596 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44e7a863-8276-4c13-bbd2-8adad31ad437","Type":"ContainerDied","Data":"3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d"} Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.861419 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.863470 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ed4f6ec-3953-48de-a051-2af04cdafeb4","Type":"ContainerStarted","Data":"00899c2ff55682dcc80a436391475749adb741170ee61080a6cf502ab7b3984f"} Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.907942 4750 scope.go:117] "RemoveContainer" containerID="669a74d24cd75ee9580da5894ca6065cc3abd41af346d0db3454521e85cbf15c" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.934794 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.954314 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.956448 4750 scope.go:117] "RemoveContainer" containerID="b406232ebd5290455415a98e22ff0b8fcc1a078f43da3959cb565ada3963f63a" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.964288 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.984962 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.987486 4750 scope.go:117] "RemoveContainer" containerID="cf3e810d8fa5a1a38cb5b3f63b4002c1e8c601c544c32044ae266b107ac5e44c" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.995342 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 14:17:54 crc kubenswrapper[4750]: E0214 14:17:54.995927 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-notifier" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.995949 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-notifier" Feb 14 14:17:54 crc kubenswrapper[4750]: E0214 14:17:54.995968 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-evaluator" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.995978 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-evaluator" Feb 14 14:17:54 crc kubenswrapper[4750]: E0214 14:17:54.996016 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-listener" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.996025 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-listener" Feb 14 14:17:54 crc kubenswrapper[4750]: E0214 14:17:54.996044 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-api" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.996053 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-api" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.996368 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-api" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.996402 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-evaluator" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.996419 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-listener" Feb 14 14:17:54 crc kubenswrapper[4750]: I0214 14:17:54.996435 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" containerName="aodh-notifier" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.000333 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.002731 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.002830 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ktj55" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.002913 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.002962 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.004644 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.005751 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.007950 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.009605 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.030384 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.045740 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.123997 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-public-tls-certs\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124054 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7209e1-923f-4507-8103-2020a196059f-config-data\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124091 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-scripts\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124188 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-config-data\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc 
kubenswrapper[4750]: I0214 14:17:55.124243 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-internal-tls-certs\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124320 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7209e1-923f-4507-8103-2020a196059f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124371 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nsjq\" (UniqueName: \"kubernetes.io/projected/2f7209e1-923f-4507-8103-2020a196059f-kube-api-access-5nsjq\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124414 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.124507 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s79q\" (UniqueName: \"kubernetes.io/projected/63273a72-38f5-4186-8e5f-ac589414fbd6-kube-api-access-5s79q\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226379 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5s79q\" (UniqueName: \"kubernetes.io/projected/63273a72-38f5-4186-8e5f-ac589414fbd6-kube-api-access-5s79q\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226504 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-public-tls-certs\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226529 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7209e1-923f-4507-8103-2020a196059f-config-data\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226556 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-scripts\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226592 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-config-data\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226631 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-internal-tls-certs\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 
crc kubenswrapper[4750]: I0214 14:17:55.226690 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7209e1-923f-4507-8103-2020a196059f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226728 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nsjq\" (UniqueName: \"kubernetes.io/projected/2f7209e1-923f-4507-8103-2020a196059f-kube-api-access-5nsjq\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.226770 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.231546 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7209e1-923f-4507-8103-2020a196059f-config-data\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.232415 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7209e1-923f-4507-8103-2020a196059f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.232848 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-config-data\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.233032 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-public-tls-certs\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.233265 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-scripts\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.233464 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.235578 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-internal-tls-certs\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.246642 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s79q\" (UniqueName: \"kubernetes.io/projected/63273a72-38f5-4186-8e5f-ac589414fbd6-kube-api-access-5s79q\") pod \"aodh-0\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.253540 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5nsjq\" (UniqueName: \"kubernetes.io/projected/2f7209e1-923f-4507-8103-2020a196059f-kube-api-access-5nsjq\") pod \"nova-scheduler-0\" (UID: \"2f7209e1-923f-4507-8103-2020a196059f\") " pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.329224 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.474754 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.838527 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.878388 4750 generic.go:334] "Generic (PLEG): container finished" podID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerID="5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028" exitCode=137 Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.878453 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerDied","Data":"5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028"} Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.878525 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbb075c7-fcf6-49dd-a403-096fba79adef","Type":"ContainerDied","Data":"f08268936b2280fbe0d27aa529df3221829df6b8d3369a7090b27a809f408e5d"} Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.878547 4750 scope.go:117] "RemoveContainer" containerID="5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.878759 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.881655 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ed4f6ec-3953-48de-a051-2af04cdafeb4","Type":"ContainerStarted","Data":"cdb625310204c186a7ac2590bea9efd4d82540ec687b16109170aa1a39290bcb"} Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.881695 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ed4f6ec-3953-48de-a051-2af04cdafeb4","Type":"ContainerStarted","Data":"d598544765d9e980cdffe2f356a73314c21e041180db47cc9c1c925b59294ba9"} Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.910087 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9100700269999997 podStartE2EDuration="2.910070027s" podCreationTimestamp="2026-02-14 14:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:55.904269692 +0000 UTC m=+1547.930259173" watchObservedRunningTime="2026-02-14 14:17:55.910070027 +0000 UTC m=+1547.936059498" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.910498 4750 scope.go:117] "RemoveContainer" containerID="99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.929633 4750 scope.go:117] "RemoveContainer" containerID="048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943363 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-config-data\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943438 4750 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-sg-core-conf-yaml\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943486 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-ceilometer-tls-certs\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943641 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-log-httpd\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943700 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-combined-ca-bundle\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943752 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8vzz\" (UniqueName: \"kubernetes.io/projected/fbb075c7-fcf6-49dd-a403-096fba79adef-kube-api-access-g8vzz\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943814 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-run-httpd\") pod 
\"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.943843 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-scripts\") pod \"fbb075c7-fcf6-49dd-a403-096fba79adef\" (UID: \"fbb075c7-fcf6-49dd-a403-096fba79adef\") " Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.944814 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.945540 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.952070 4750 scope.go:117] "RemoveContainer" containerID="862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.957220 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-scripts" (OuterVolumeSpecName: "scripts") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.963464 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb075c7-fcf6-49dd-a403-096fba79adef-kube-api-access-g8vzz" (OuterVolumeSpecName: "kube-api-access-g8vzz") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "kube-api-access-g8vzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:55 crc kubenswrapper[4750]: I0214 14:17:55.987061 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.000945 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.002936 4750 scope.go:117] "RemoveContainer" containerID="5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.004576 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028\": container with ID starting with 5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028 not found: ID does not exist" containerID="5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.004624 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028"} err="failed to get 
container status \"5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028\": rpc error: code = NotFound desc = could not find container \"5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028\": container with ID starting with 5b8028449e4342768e566340222de697c333306837f9a1212f0bc5de9e2ce028 not found: ID does not exist" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.004674 4750 scope.go:117] "RemoveContainer" containerID="99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.005173 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1\": container with ID starting with 99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1 not found: ID does not exist" containerID="99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.005217 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1"} err="failed to get container status \"99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1\": rpc error: code = NotFound desc = could not find container \"99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1\": container with ID starting with 99a81075a5313e95be94231cb8cd4cdf114a70077087d6d80a4cc92e960850c1 not found: ID does not exist" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.005245 4750 scope.go:117] "RemoveContainer" containerID="048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.005545 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4\": container with ID starting with 048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4 not found: ID does not exist" containerID="048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.005587 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4"} err="failed to get container status \"048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4\": rpc error: code = NotFound desc = could not find container \"048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4\": container with ID starting with 048a324008b1f80ad43881ee83a1c0e2ffdeefc641620452baf6f8f8470e2ef4 not found: ID does not exist" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.005619 4750 scope.go:117] "RemoveContainer" containerID="862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.005937 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692\": container with ID starting with 862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692 not found: ID does not exist" containerID="862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.005974 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692"} err="failed to get container status \"862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692\": rpc error: code = NotFound desc = could not find container \"862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692\": container with ID 
starting with 862812b0a7e657ecbbe1afc0580c469c1d684975383ff517f7b0e72bf48a8692 not found: ID does not exist" Feb 14 14:17:56 crc kubenswrapper[4750]: W0214 14:17:56.008123 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63273a72_38f5_4186_8e5f_ac589414fbd6.slice/crio-f519cdcb9c51d12a1bcf715483103aa71fb0ac56fc26d623c2ee996c25ac4273 WatchSource:0}: Error finding container f519cdcb9c51d12a1bcf715483103aa71fb0ac56fc26d623c2ee996c25ac4273: Status 404 returned error can't find the container with id f519cdcb9c51d12a1bcf715483103aa71fb0ac56fc26d623c2ee996c25ac4273 Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.037975 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.046474 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.046511 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.046526 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.046538 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.046548 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8vzz\" (UniqueName: \"kubernetes.io/projected/fbb075c7-fcf6-49dd-a403-096fba79adef-kube-api-access-g8vzz\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.046560 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb075c7-fcf6-49dd-a403-096fba79adef-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.047606 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: 
"fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.082351 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-config-data" (OuterVolumeSpecName: "config-data") pod "fbb075c7-fcf6-49dd-a403-096fba79adef" (UID: "fbb075c7-fcf6-49dd-a403-096fba79adef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.131414 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.150365 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.150399 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb075c7-fcf6-49dd-a403-096fba79adef-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.239545 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.278229 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.293890 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.294544 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="proxy-httpd" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.294574 4750 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="proxy-httpd" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.294611 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-central-agent" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.294621 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-central-agent" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.294649 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="sg-core" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.294657 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="sg-core" Feb 14 14:17:56 crc kubenswrapper[4750]: E0214 14:17:56.294695 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-notification-agent" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.294703 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-notification-agent" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.295063 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="proxy-httpd" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.295090 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-central-agent" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.295105 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="sg-core" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.295144 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" containerName="ceilometer-notification-agent" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.297940 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.300430 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.301072 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.301465 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.307138 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.456631 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457037 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-log-httpd\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457094 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-scripts\") pod \"ceilometer-0\" 
(UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457182 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-config-data\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457213 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrwh\" (UniqueName: \"kubernetes.io/projected/86882d90-da77-4ce1-b480-83e928449af9-kube-api-access-6wrwh\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457274 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457295 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-run-httpd\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.457331 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc 
kubenswrapper[4750]: I0214 14:17:56.560169 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-scripts\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560303 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-config-data\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560371 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrwh\" (UniqueName: \"kubernetes.io/projected/86882d90-da77-4ce1-b480-83e928449af9-kube-api-access-6wrwh\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560458 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560493 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-run-httpd\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560541 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560693 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.560818 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-log-httpd\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.561336 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-run-httpd\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.561553 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-log-httpd\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.566483 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.566644 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.566699 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.567292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-scripts\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.567900 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-config-data\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.588799 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrwh\" (UniqueName: \"kubernetes.io/projected/86882d90-da77-4ce1-b480-83e928449af9-kube-api-access-6wrwh\") pod \"ceilometer-0\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.702728 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.772667 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1" path="/var/lib/kubelet/pods/29ea84b0-8bd9-44e7-b6e5-3c1f026c55c1/volumes" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.773644 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50" path="/var/lib/kubelet/pods/9ddabfbf-fc7d-4f4b-9ceb-6966ff14ea50/volumes" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.774269 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb075c7-fcf6-49dd-a403-096fba79adef" path="/var/lib/kubelet/pods/fbb075c7-fcf6-49dd-a403-096fba79adef/volumes" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.903441 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2f7209e1-923f-4507-8103-2020a196059f","Type":"ContainerStarted","Data":"76c1cf3a1f1229a2715377ac34f73a2e57de0fb4af5f574b7e03cd9d776b64fe"} Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.903752 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2f7209e1-923f-4507-8103-2020a196059f","Type":"ContainerStarted","Data":"c7e0a2697b2ad90ab4a4ccfe9a37d8a312cf2d29cf2399e1fbb21c75d2671f93"} Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.907052 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerStarted","Data":"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71"} Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.907084 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerStarted","Data":"f519cdcb9c51d12a1bcf715483103aa71fb0ac56fc26d623c2ee996c25ac4273"} Feb 
14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.928031 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9280118 podStartE2EDuration="2.9280118s" podCreationTimestamp="2026-02-14 14:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:56.918279033 +0000 UTC m=+1548.944268544" watchObservedRunningTime="2026-02-14 14:17:56.9280118 +0000 UTC m=+1548.954001281" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.960991 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": read tcp 10.217.0.2:60402->10.217.1.2:8775: read: connection reset by peer" Feb 14 14:17:56 crc kubenswrapper[4750]: I0214 14:17:56.961157 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": read tcp 10.217.0.2:60390->10.217.1.2:8775: read: connection reset by peer" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.233952 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.573551 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.690322 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7a863-8276-4c13-bbd2-8adad31ad437-logs\") pod \"44e7a863-8276-4c13-bbd2-8adad31ad437\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.690372 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-config-data\") pod \"44e7a863-8276-4c13-bbd2-8adad31ad437\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.690539 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-nova-metadata-tls-certs\") pod \"44e7a863-8276-4c13-bbd2-8adad31ad437\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.690590 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-combined-ca-bundle\") pod \"44e7a863-8276-4c13-bbd2-8adad31ad437\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.690685 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5ks8\" (UniqueName: \"kubernetes.io/projected/44e7a863-8276-4c13-bbd2-8adad31ad437-kube-api-access-g5ks8\") pod \"44e7a863-8276-4c13-bbd2-8adad31ad437\" (UID: \"44e7a863-8276-4c13-bbd2-8adad31ad437\") " Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.692846 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/44e7a863-8276-4c13-bbd2-8adad31ad437-logs" (OuterVolumeSpecName: "logs") pod "44e7a863-8276-4c13-bbd2-8adad31ad437" (UID: "44e7a863-8276-4c13-bbd2-8adad31ad437"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.695561 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e7a863-8276-4c13-bbd2-8adad31ad437-kube-api-access-g5ks8" (OuterVolumeSpecName: "kube-api-access-g5ks8") pod "44e7a863-8276-4c13-bbd2-8adad31ad437" (UID: "44e7a863-8276-4c13-bbd2-8adad31ad437"). InnerVolumeSpecName "kube-api-access-g5ks8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.723859 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-config-data" (OuterVolumeSpecName: "config-data") pod "44e7a863-8276-4c13-bbd2-8adad31ad437" (UID: "44e7a863-8276-4c13-bbd2-8adad31ad437"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.773328 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "44e7a863-8276-4c13-bbd2-8adad31ad437" (UID: "44e7a863-8276-4c13-bbd2-8adad31ad437"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.794606 4750 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7a863-8276-4c13-bbd2-8adad31ad437-logs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.794636 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.794645 4750 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.794654 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5ks8\" (UniqueName: \"kubernetes.io/projected/44e7a863-8276-4c13-bbd2-8adad31ad437-kube-api-access-g5ks8\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.820621 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44e7a863-8276-4c13-bbd2-8adad31ad437" (UID: "44e7a863-8276-4c13-bbd2-8adad31ad437"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.902829 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7a863-8276-4c13-bbd2-8adad31ad437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.931225 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerStarted","Data":"11473f1350fac51fe7a49828cf4c483973924b234b2df48e876dd84e4106836a"} Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.937489 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerStarted","Data":"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb"} Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.941568 4750 generic.go:334] "Generic (PLEG): container finished" podID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerID="56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c" exitCode=0 Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.942543 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.943338 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44e7a863-8276-4c13-bbd2-8adad31ad437","Type":"ContainerDied","Data":"56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c"} Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.943444 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44e7a863-8276-4c13-bbd2-8adad31ad437","Type":"ContainerDied","Data":"6ec3a70e80d1f059c9c14ff4e61386a8850ce8f675e76559536193002c38fd41"} Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.943520 4750 scope.go:117] "RemoveContainer" containerID="56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c" Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.981404 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.992253 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:57 crc kubenswrapper[4750]: I0214 14:17:57.993662 4750 scope.go:117] "RemoveContainer" containerID="3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.019168 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:58 crc kubenswrapper[4750]: E0214 14:17:58.019695 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-metadata" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.019711 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-metadata" Feb 14 14:17:58 crc kubenswrapper[4750]: E0214 14:17:58.019754 4750 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-log" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.019760 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-log" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.019977 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-log" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.020004 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" containerName="nova-metadata-metadata" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.021246 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.030963 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.057926 4750 scope.go:117] "RemoveContainer" containerID="56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.058737 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.058873 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 14:17:58 crc kubenswrapper[4750]: E0214 14:17:58.068330 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c\": container with ID starting with 56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c not found: ID does not exist" 
containerID="56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.068577 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c"} err="failed to get container status \"56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c\": rpc error: code = NotFound desc = could not find container \"56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c\": container with ID starting with 56d02699313110ff6c05dba2a78ea08396918dbcd9a98cb4cc302cc41eabff6c not found: ID does not exist" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.068659 4750 scope.go:117] "RemoveContainer" containerID="3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d" Feb 14 14:17:58 crc kubenswrapper[4750]: E0214 14:17:58.072242 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d\": container with ID starting with 3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d not found: ID does not exist" containerID="3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.072382 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d"} err="failed to get container status \"3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d\": rpc error: code = NotFound desc = could not find container \"3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d\": container with ID starting with 3df89de7369610d202a5ab81ac5c7a811242a635b2e19db8fc8ba6276a95fc8d not found: ID does not exist" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.114590 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.114651 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-config-data\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.114704 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b53bb3a-9e74-4713-aedc-254b6671326d-logs\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.114732 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nktg\" (UniqueName: \"kubernetes.io/projected/2b53bb3a-9e74-4713-aedc-254b6671326d-kube-api-access-7nktg\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.114779 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.216375 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-config-data\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.216456 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b53bb3a-9e74-4713-aedc-254b6671326d-logs\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.216496 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nktg\" (UniqueName: \"kubernetes.io/projected/2b53bb3a-9e74-4713-aedc-254b6671326d-kube-api-access-7nktg\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.216572 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.216747 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.217641 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b53bb3a-9e74-4713-aedc-254b6671326d-logs\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " 
pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.220911 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.221601 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.221838 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bb3a-9e74-4713-aedc-254b6671326d-config-data\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.234775 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nktg\" (UniqueName: \"kubernetes.io/projected/2b53bb3a-9e74-4713-aedc-254b6671326d-kube-api-access-7nktg\") pod \"nova-metadata-0\" (UID: \"2b53bb3a-9e74-4713-aedc-254b6671326d\") " pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.433587 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.764393 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e7a863-8276-4c13-bbd2-8adad31ad437" path="/var/lib/kubelet/pods/44e7a863-8276-4c13-bbd2-8adad31ad437/volumes" Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.953592 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerStarted","Data":"2e6f1e0f3f2a8ad56e4497ec938ecbf826bd9379ac775566d7f5e0fb32cc6664"} Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.953633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerStarted","Data":"497280f58b025492f0e1424f1d48d616c6f081e2f4a6a0e8f203ad1a59042884"} Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.956999 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerStarted","Data":"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88"} Feb 14 14:17:58 crc kubenswrapper[4750]: I0214 14:17:58.957051 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerStarted","Data":"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940"} Feb 14 14:17:58 crc kubenswrapper[4750]: W0214 14:17:58.994359 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b53bb3a_9e74_4713_aedc_254b6671326d.slice/crio-e037c194f12bf6dfeb4c80d87414e62d53e033b8d6eb6756144b1d97f8d17742 WatchSource:0}: Error finding container e037c194f12bf6dfeb4c80d87414e62d53e033b8d6eb6756144b1d97f8d17742: Status 404 returned error can't find the container with id 
e037c194f12bf6dfeb4c80d87414e62d53e033b8d6eb6756144b1d97f8d17742 Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.017519 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.468196053 podStartE2EDuration="5.017501872s" podCreationTimestamp="2026-02-14 14:17:54 +0000 UTC" firstStartedPulling="2026-02-14 14:17:56.011454183 +0000 UTC m=+1548.037443664" lastFinishedPulling="2026-02-14 14:17:58.560759982 +0000 UTC m=+1550.586749483" observedRunningTime="2026-02-14 14:17:58.977304188 +0000 UTC m=+1551.003293699" watchObservedRunningTime="2026-02-14 14:17:59.017501872 +0000 UTC m=+1551.043491353" Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.056804 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.972618 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b53bb3a-9e74-4713-aedc-254b6671326d","Type":"ContainerStarted","Data":"c1ce35ef0892be049168c29dfe9a0e6fafc1c96bb2a44eded26fed9d2174ab18"} Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.972900 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b53bb3a-9e74-4713-aedc-254b6671326d","Type":"ContainerStarted","Data":"60089bd606c4e97da2948af90c0864d4b7eceab6bb5bb2f8e50d41397f3673c5"} Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.972918 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b53bb3a-9e74-4713-aedc-254b6671326d","Type":"ContainerStarted","Data":"e037c194f12bf6dfeb4c80d87414e62d53e033b8d6eb6756144b1d97f8d17742"} Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.975656 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerStarted","Data":"c4b89d43b38c807db07d94da50d431bf46089d0d3b65f377a472e1b0d7ec0f96"} Feb 14 14:17:59 crc kubenswrapper[4750]: I0214 14:17:59.998601 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9985828359999998 podStartE2EDuration="2.998582836s" podCreationTimestamp="2026-02-14 14:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:17:59.991792133 +0000 UTC m=+1552.017781614" watchObservedRunningTime="2026-02-14 14:17:59.998582836 +0000 UTC m=+1552.024572317" Feb 14 14:18:00 crc kubenswrapper[4750]: I0214 14:18:00.129061 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:18:00 crc kubenswrapper[4750]: I0214 14:18:00.129792 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:18:00 crc kubenswrapper[4750]: I0214 14:18:00.474880 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 14:18:02 crc kubenswrapper[4750]: I0214 14:18:02.003693 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerStarted","Data":"349c9ee5d7ec1a435f1cc10c019219832d7fd6175ee2d77b47a7eb4a8fd77898"} Feb 14 14:18:02 crc kubenswrapper[4750]: I0214 14:18:02.004221 4750 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:18:02 crc kubenswrapper[4750]: I0214 14:18:02.041872 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.352201395 podStartE2EDuration="6.041850162s" podCreationTimestamp="2026-02-14 14:17:56 +0000 UTC" firstStartedPulling="2026-02-14 14:17:57.265712062 +0000 UTC m=+1549.291701543" lastFinishedPulling="2026-02-14 14:18:00.955360789 +0000 UTC m=+1552.981350310" observedRunningTime="2026-02-14 14:18:02.023613873 +0000 UTC m=+1554.049603374" watchObservedRunningTime="2026-02-14 14:18:02.041850162 +0000 UTC m=+1554.067839643" Feb 14 14:18:03 crc kubenswrapper[4750]: I0214 14:18:03.434000 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 14:18:03 crc kubenswrapper[4750]: I0214 14:18:03.434378 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 14:18:04 crc kubenswrapper[4750]: I0214 14:18:04.270633 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 14:18:04 crc kubenswrapper[4750]: I0214 14:18:04.270705 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 14:18:05 crc kubenswrapper[4750]: I0214 14:18:05.287387 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ed4f6ec-3953-48de-a051-2af04cdafeb4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.6:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:18:05 crc kubenswrapper[4750]: I0214 14:18:05.287380 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ed4f6ec-3953-48de-a051-2af04cdafeb4" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.1.6:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:18:05 crc kubenswrapper[4750]: I0214 14:18:05.475585 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 14:18:05 crc kubenswrapper[4750]: I0214 14:18:05.535162 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 14:18:06 crc kubenswrapper[4750]: I0214 14:18:06.091556 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 14:18:08 crc kubenswrapper[4750]: I0214 14:18:08.434251 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 14:18:08 crc kubenswrapper[4750]: I0214 14:18:08.434601 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 14:18:09 crc kubenswrapper[4750]: I0214 14:18:09.451364 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2b53bb3a-9e74-4713-aedc-254b6671326d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:18:09 crc kubenswrapper[4750]: I0214 14:18:09.451423 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2b53bb3a-9e74-4713-aedc-254b6671326d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 14:18:14 crc kubenswrapper[4750]: I0214 14:18:14.277841 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 14:18:14 crc kubenswrapper[4750]: I0214 14:18:14.278953 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-api-0" Feb 14 14:18:14 crc kubenswrapper[4750]: I0214 14:18:14.279296 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 14:18:14 crc kubenswrapper[4750]: I0214 14:18:14.285294 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 14:18:15 crc kubenswrapper[4750]: I0214 14:18:15.157952 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 14:18:15 crc kubenswrapper[4750]: I0214 14:18:15.165550 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 14:18:18 crc kubenswrapper[4750]: I0214 14:18:18.444907 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 14:18:18 crc kubenswrapper[4750]: I0214 14:18:18.445711 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 14:18:18 crc kubenswrapper[4750]: I0214 14:18:18.451596 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 14:18:18 crc kubenswrapper[4750]: I0214 14:18:18.451735 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.334452 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7b8k8"] Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.338453 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.353899 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7b8k8"] Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.400931 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-utilities\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.401559 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-catalog-content\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.402015 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5s2s\" (UniqueName: \"kubernetes.io/projected/5a568720-47a5-4420-b6f4-3c52a348e90b-kube-api-access-w5s2s\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.504938 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-utilities\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.505318 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-catalog-content\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.505610 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5s2s\" (UniqueName: \"kubernetes.io/projected/5a568720-47a5-4420-b6f4-3c52a348e90b-kube-api-access-w5s2s\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.506058 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-utilities\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.506205 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-catalog-content\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.546165 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5s2s\" (UniqueName: \"kubernetes.io/projected/5a568720-47a5-4420-b6f4-3c52a348e90b-kube-api-access-w5s2s\") pod \"community-operators-7b8k8\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:19 crc kubenswrapper[4750]: I0214 14:18:19.687852 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:20 crc kubenswrapper[4750]: I0214 14:18:20.165481 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7b8k8"] Feb 14 14:18:20 crc kubenswrapper[4750]: W0214 14:18:20.169026 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a568720_47a5_4420_b6f4_3c52a348e90b.slice/crio-4b6bf3ea63f8b34639dda9a30b834886b8171911c02cf9b5e05c7ea3bbda01d9 WatchSource:0}: Error finding container 4b6bf3ea63f8b34639dda9a30b834886b8171911c02cf9b5e05c7ea3bbda01d9: Status 404 returned error can't find the container with id 4b6bf3ea63f8b34639dda9a30b834886b8171911c02cf9b5e05c7ea3bbda01d9 Feb 14 14:18:20 crc kubenswrapper[4750]: I0214 14:18:20.235014 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerStarted","Data":"4b6bf3ea63f8b34639dda9a30b834886b8171911c02cf9b5e05c7ea3bbda01d9"} Feb 14 14:18:21 crc kubenswrapper[4750]: I0214 14:18:21.245886 4750 generic.go:334] "Generic (PLEG): container finished" podID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerID="b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1" exitCode=0 Feb 14 14:18:21 crc kubenswrapper[4750]: I0214 14:18:21.246092 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerDied","Data":"b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1"} Feb 14 14:18:22 crc kubenswrapper[4750]: I0214 14:18:22.265014 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" 
event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerStarted","Data":"16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a"} Feb 14 14:18:24 crc kubenswrapper[4750]: I0214 14:18:24.296816 4750 generic.go:334] "Generic (PLEG): container finished" podID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerID="16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a" exitCode=0 Feb 14 14:18:24 crc kubenswrapper[4750]: I0214 14:18:24.297375 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerDied","Data":"16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a"} Feb 14 14:18:25 crc kubenswrapper[4750]: I0214 14:18:25.310581 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerStarted","Data":"df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c"} Feb 14 14:18:25 crc kubenswrapper[4750]: I0214 14:18:25.342662 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7b8k8" podStartSLOduration=2.901287619 podStartE2EDuration="6.342638063s" podCreationTimestamp="2026-02-14 14:18:19 +0000 UTC" firstStartedPulling="2026-02-14 14:18:21.24746853 +0000 UTC m=+1573.273458011" lastFinishedPulling="2026-02-14 14:18:24.688818974 +0000 UTC m=+1576.714808455" observedRunningTime="2026-02-14 14:18:25.327536253 +0000 UTC m=+1577.353525764" watchObservedRunningTime="2026-02-14 14:18:25.342638063 +0000 UTC m=+1577.368627564" Feb 14 14:18:26 crc kubenswrapper[4750]: I0214 14:18:26.720678 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 14:18:29 crc kubenswrapper[4750]: I0214 14:18:29.688553 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:29 crc kubenswrapper[4750]: I0214 14:18:29.690624 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.131366 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.131705 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.131754 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.132553 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.132624 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" 
containerID="cri-o://fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" gracePeriod=600 Feb 14 14:18:30 crc kubenswrapper[4750]: E0214 14:18:30.289927 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.367414 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" exitCode=0 Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.367706 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4"} Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.367837 4750 scope.go:117] "RemoveContainer" containerID="3911c83e66f85b48593d6e540ca28f5e1b07698970a4746fbe1e53b6a3e79ffd" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.368715 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:18:30 crc kubenswrapper[4750]: E0214 14:18:30.369105 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" 
podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:18:30 crc kubenswrapper[4750]: I0214 14:18:30.754572 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7b8k8" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="registry-server" probeResult="failure" output=< Feb 14 14:18:30 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:18:30 crc kubenswrapper[4750]: > Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.134583 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gdzkt"] Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.144588 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gdzkt"] Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.244575 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zrqpc"] Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.246127 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.257861 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zrqpc"] Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.379727 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-combined-ca-bundle\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.380168 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-config-data\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.380449 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8zq\" (UniqueName: \"kubernetes.io/projected/15aeb180-f3b4-46a0-847c-5befbf340b51-kube-api-access-sr8zq\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.482695 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-combined-ca-bundle\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.482819 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-config-data\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.482902 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8zq\" (UniqueName: \"kubernetes.io/projected/15aeb180-f3b4-46a0-847c-5befbf340b51-kube-api-access-sr8zq\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.489460 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-combined-ca-bundle\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.490292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-config-data\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.500320 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8zq\" (UniqueName: \"kubernetes.io/projected/15aeb180-f3b4-46a0-847c-5befbf340b51-kube-api-access-sr8zq\") pod \"heat-db-sync-zrqpc\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.593570 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zrqpc" Feb 14 14:18:38 crc kubenswrapper[4750]: I0214 14:18:38.787944 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b46a12b-a34f-4850-a1b8-a764ba798764" path="/var/lib/kubelet/pods/2b46a12b-a34f-4850-a1b8-a764ba798764/volumes" Feb 14 14:18:39 crc kubenswrapper[4750]: I0214 14:18:39.175853 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zrqpc"] Feb 14 14:18:39 crc kubenswrapper[4750]: I0214 14:18:39.178228 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:18:39 crc kubenswrapper[4750]: I0214 14:18:39.518041 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zrqpc" event={"ID":"15aeb180-f3b4-46a0-847c-5befbf340b51","Type":"ContainerStarted","Data":"703305b59980b010dad01a07ab0d325c9ec6d034b3266a271b409d391fc82a05"} Feb 14 14:18:39 crc kubenswrapper[4750]: I0214 14:18:39.754313 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:39 crc kubenswrapper[4750]: I0214 14:18:39.810317 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:39 crc kubenswrapper[4750]: I0214 14:18:39.998931 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7b8k8"] Feb 14 14:18:40 crc kubenswrapper[4750]: I0214 14:18:40.317625 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:18:40 crc kubenswrapper[4750]: I0214 14:18:40.359058 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:18:40 crc kubenswrapper[4750]: I0214 14:18:40.361604 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-central-agent" containerID="cri-o://2e6f1e0f3f2a8ad56e4497ec938ecbf826bd9379ac775566d7f5e0fb32cc6664" gracePeriod=30 Feb 14 14:18:40 crc kubenswrapper[4750]: I0214 14:18:40.362163 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="proxy-httpd" containerID="cri-o://349c9ee5d7ec1a435f1cc10c019219832d7fd6175ee2d77b47a7eb4a8fd77898" gracePeriod=30 Feb 14 14:18:40 crc kubenswrapper[4750]: I0214 14:18:40.362232 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="sg-core" containerID="cri-o://c4b89d43b38c807db07d94da50d431bf46089d0d3b65f377a472e1b0d7ec0f96" gracePeriod=30 Feb 14 14:18:40 crc kubenswrapper[4750]: I0214 14:18:40.362294 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-notification-agent" containerID="cri-o://497280f58b025492f0e1424f1d48d616c6f081e2f4a6a0e8f203ad1a59042884" gracePeriod=30 Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.319456 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.572963 4750 generic.go:334] "Generic (PLEG): container finished" podID="86882d90-da77-4ce1-b480-83e928449af9" containerID="349c9ee5d7ec1a435f1cc10c019219832d7fd6175ee2d77b47a7eb4a8fd77898" exitCode=0 Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.572993 4750 generic.go:334] "Generic (PLEG): container finished" podID="86882d90-da77-4ce1-b480-83e928449af9" containerID="c4b89d43b38c807db07d94da50d431bf46089d0d3b65f377a472e1b0d7ec0f96" exitCode=2 Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573002 4750 generic.go:334] "Generic 
(PLEG): container finished" podID="86882d90-da77-4ce1-b480-83e928449af9" containerID="497280f58b025492f0e1424f1d48d616c6f081e2f4a6a0e8f203ad1a59042884" exitCode=0 Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573010 4750 generic.go:334] "Generic (PLEG): container finished" podID="86882d90-da77-4ce1-b480-83e928449af9" containerID="2e6f1e0f3f2a8ad56e4497ec938ecbf826bd9379ac775566d7f5e0fb32cc6664" exitCode=0 Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573849 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerDied","Data":"349c9ee5d7ec1a435f1cc10c019219832d7fd6175ee2d77b47a7eb4a8fd77898"} Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573895 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerDied","Data":"c4b89d43b38c807db07d94da50d431bf46089d0d3b65f377a472e1b0d7ec0f96"} Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573907 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerDied","Data":"497280f58b025492f0e1424f1d48d616c6f081e2f4a6a0e8f203ad1a59042884"} Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573916 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerDied","Data":"2e6f1e0f3f2a8ad56e4497ec938ecbf826bd9379ac775566d7f5e0fb32cc6664"} Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.573946 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7b8k8" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="registry-server" containerID="cri-o://df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c" gracePeriod=2 Feb 14 14:18:41 crc 
kubenswrapper[4750]: I0214 14:18:41.745602 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:18:41 crc kubenswrapper[4750]: E0214 14:18:41.745889 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:18:41 crc kubenswrapper[4750]: I0214 14:18:41.990873 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178325 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-log-httpd\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178393 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-scripts\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178434 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-config-data\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178454 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6wrwh\" (UniqueName: \"kubernetes.io/projected/86882d90-da77-4ce1-b480-83e928449af9-kube-api-access-6wrwh\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178560 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-run-httpd\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178655 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-ceilometer-tls-certs\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178680 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-sg-core-conf-yaml\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.178783 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-combined-ca-bundle\") pod \"86882d90-da77-4ce1-b480-83e928449af9\" (UID: \"86882d90-da77-4ce1-b480-83e928449af9\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.197387 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.199401 4750 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.199604 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.271600 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-scripts" (OuterVolumeSpecName: "scripts") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.271737 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86882d90-da77-4ce1-b480-83e928449af9-kube-api-access-6wrwh" (OuterVolumeSpecName: "kube-api-access-6wrwh") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "kube-api-access-6wrwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.301786 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.301818 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wrwh\" (UniqueName: \"kubernetes.io/projected/86882d90-da77-4ce1-b480-83e928449af9-kube-api-access-6wrwh\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.301881 4750 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86882d90-da77-4ce1-b480-83e928449af9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.311554 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.371473 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.380626 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.404990 4750 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.405027 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.481382 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.507092 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-utilities\") pod \"5a568720-47a5-4420-b6f4-3c52a348e90b\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.507551 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-utilities" (OuterVolumeSpecName: "utilities") pod "5a568720-47a5-4420-b6f4-3c52a348e90b" (UID: "5a568720-47a5-4420-b6f4-3c52a348e90b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.508164 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-catalog-content\") pod \"5a568720-47a5-4420-b6f4-3c52a348e90b\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.508227 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5s2s\" (UniqueName: \"kubernetes.io/projected/5a568720-47a5-4420-b6f4-3c52a348e90b-kube-api-access-w5s2s\") pod \"5a568720-47a5-4420-b6f4-3c52a348e90b\" (UID: \"5a568720-47a5-4420-b6f4-3c52a348e90b\") " Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.509321 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.509353 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.509452 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-config-data" (OuterVolumeSpecName: "config-data") pod "86882d90-da77-4ce1-b480-83e928449af9" (UID: "86882d90-da77-4ce1-b480-83e928449af9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.515394 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a568720-47a5-4420-b6f4-3c52a348e90b-kube-api-access-w5s2s" (OuterVolumeSpecName: "kube-api-access-w5s2s") pod "5a568720-47a5-4420-b6f4-3c52a348e90b" (UID: "5a568720-47a5-4420-b6f4-3c52a348e90b"). InnerVolumeSpecName "kube-api-access-w5s2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.580206 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a568720-47a5-4420-b6f4-3c52a348e90b" (UID: "5a568720-47a5-4420-b6f4-3c52a348e90b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.589839 4750 generic.go:334] "Generic (PLEG): container finished" podID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerID="df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c" exitCode=0 Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.589939 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b8k8" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.589942 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerDied","Data":"df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c"} Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.590107 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b8k8" event={"ID":"5a568720-47a5-4420-b6f4-3c52a348e90b","Type":"ContainerDied","Data":"4b6bf3ea63f8b34639dda9a30b834886b8171911c02cf9b5e05c7ea3bbda01d9"} Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.590166 4750 scope.go:117] "RemoveContainer" containerID="df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.593692 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86882d90-da77-4ce1-b480-83e928449af9","Type":"ContainerDied","Data":"11473f1350fac51fe7a49828cf4c483973924b234b2df48e876dd84e4106836a"} Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.593756 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.613817 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86882d90-da77-4ce1-b480-83e928449af9-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.613846 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a568720-47a5-4420-b6f4-3c52a348e90b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.613860 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5s2s\" (UniqueName: \"kubernetes.io/projected/5a568720-47a5-4420-b6f4-3c52a348e90b-kube-api-access-w5s2s\") on node \"crc\" DevicePath \"\"" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.639025 4750 scope.go:117] "RemoveContainer" containerID="16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.642465 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.660515 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.674035 4750 scope.go:117] "RemoveContainer" containerID="b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.677863 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7b8k8"] Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.710858 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7b8k8"] Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736186 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736741 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-central-agent" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736758 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-central-agent" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736771 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="extract-utilities" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736777 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="extract-utilities" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736789 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-notification-agent" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736795 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-notification-agent" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736810 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="proxy-httpd" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736816 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="proxy-httpd" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736829 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="sg-core" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736836 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="sg-core" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736846 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="extract-content" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736852 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="extract-content" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.736878 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="registry-server" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.736885 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="registry-server" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.737087 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="proxy-httpd" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.737102 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" containerName="registry-server" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.737164 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-notification-agent" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.737177 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="sg-core" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.737195 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="86882d90-da77-4ce1-b480-83e928449af9" containerName="ceilometer-central-agent" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.739317 4750 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.740473 4750 scope.go:117] "RemoveContainer" containerID="df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.744287 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.744493 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.745164 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c\": container with ID starting with df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c not found: ID does not exist" containerID="df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.745198 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c"} err="failed to get container status \"df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c\": rpc error: code = NotFound desc = could not find container \"df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c\": container with ID starting with df7406963ffecff9a1f880841fb0d7e1140e832166d4c4fdc3310f8f5bbadb5c not found: ID does not exist" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.745230 4750 scope.go:117] "RemoveContainer" containerID="16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.745388 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 14:18:42 crc 
kubenswrapper[4750]: E0214 14:18:42.750950 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a\": container with ID starting with 16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a not found: ID does not exist" containerID="16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.750994 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a"} err="failed to get container status \"16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a\": rpc error: code = NotFound desc = could not find container \"16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a\": container with ID starting with 16dd8bb3ec9f379082d7453ebb49a5b8a3efdd40e412544503731150d1b21a4a not found: ID does not exist" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.751022 4750 scope.go:117] "RemoveContainer" containerID="b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1" Feb 14 14:18:42 crc kubenswrapper[4750]: E0214 14:18:42.756023 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1\": container with ID starting with b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1 not found: ID does not exist" containerID="b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.756055 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1"} err="failed to get container status 
\"b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1\": rpc error: code = NotFound desc = could not find container \"b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1\": container with ID starting with b9329e0b3eed37e137e277edc64061ad333ec5d5bc00df6b4de87340ec7de9a1 not found: ID does not exist" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.756078 4750 scope.go:117] "RemoveContainer" containerID="349c9ee5d7ec1a435f1cc10c019219832d7fd6175ee2d77b47a7eb4a8fd77898" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.774800 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a568720-47a5-4420-b6f4-3c52a348e90b" path="/var/lib/kubelet/pods/5a568720-47a5-4420-b6f4-3c52a348e90b/volumes" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.786981 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86882d90-da77-4ce1-b480-83e928449af9" path="/var/lib/kubelet/pods/86882d90-da77-4ce1-b480-83e928449af9/volumes" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.792226 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.871655 4750 scope.go:117] "RemoveContainer" containerID="c4b89d43b38c807db07d94da50d431bf46089d0d3b65f377a472e1b0d7ec0f96" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921148 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-scripts\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921227 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921282 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-run-httpd\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921367 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921390 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-config-data\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921425 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44bq\" (UniqueName: \"kubernetes.io/projected/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-kube-api-access-x44bq\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.921444 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-log-httpd\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 
14:18:42.921549 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.926956 4750 scope.go:117] "RemoveContainer" containerID="497280f58b025492f0e1424f1d48d616c6f081e2f4a6a0e8f203ad1a59042884" Feb 14 14:18:42 crc kubenswrapper[4750]: I0214 14:18:42.977849 4750 scope.go:117] "RemoveContainer" containerID="2e6f1e0f3f2a8ad56e4497ec938ecbf826bd9379ac775566d7f5e0fb32cc6664" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.023921 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024291 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-run-httpd\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024379 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024420 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-config-data\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024478 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44bq\" (UniqueName: \"kubernetes.io/projected/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-kube-api-access-x44bq\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024499 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-log-httpd\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024517 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.024591 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-scripts\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.026776 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-run-httpd\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.028565 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-log-httpd\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.032311 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.034185 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-scripts\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.034899 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-config-data\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.035310 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.047043 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " 
pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.051683 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44bq\" (UniqueName: \"kubernetes.io/projected/b67f7611-5ec3-4e68-86cf-52b26c4e3b1f-kube-api-access-x44bq\") pod \"ceilometer-0\" (UID: \"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f\") " pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.172096 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 14:18:43 crc kubenswrapper[4750]: I0214 14:18:43.924801 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 14:18:44 crc kubenswrapper[4750]: I0214 14:18:44.674009 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f","Type":"ContainerStarted","Data":"d1233d5ed1b904e7d3a49ce08c9bb144ffd616dab270a13e66c0c161f8d3ea17"} Feb 14 14:18:45 crc kubenswrapper[4750]: I0214 14:18:45.314264 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="rabbitmq" containerID="cri-o://e3dfaeac3877491d9a6ba7973adcdbda21b465a624a27b5515e9052b76c9e5d2" gracePeriod=604796 Feb 14 14:18:45 crc kubenswrapper[4750]: I0214 14:18:45.719243 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="rabbitmq" containerID="cri-o://5d9735d0b4e625dd0eb8a68b36061a57a57676754559f0a692ae6637d69260f0" gracePeriod=604796 Feb 14 14:18:51 crc kubenswrapper[4750]: I0214 14:18:51.791453 4750 generic.go:334] "Generic (PLEG): container finished" podID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerID="e3dfaeac3877491d9a6ba7973adcdbda21b465a624a27b5515e9052b76c9e5d2" exitCode=0 Feb 14 14:18:51 crc 
kubenswrapper[4750]: I0214 14:18:51.791525 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f513b3ee-aa21-48f3-b5fa-395f0557292a","Type":"ContainerDied","Data":"e3dfaeac3877491d9a6ba7973adcdbda21b465a624a27b5515e9052b76c9e5d2"} Feb 14 14:18:52 crc kubenswrapper[4750]: I0214 14:18:52.743069 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:18:52 crc kubenswrapper[4750]: E0214 14:18:52.743661 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:18:52 crc kubenswrapper[4750]: I0214 14:18:52.840536 4750 generic.go:334] "Generic (PLEG): container finished" podID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerID="5d9735d0b4e625dd0eb8a68b36061a57a57676754559f0a692ae6637d69260f0" exitCode=0 Feb 14 14:18:52 crc kubenswrapper[4750]: I0214 14:18:52.840592 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9ccd0d36-6fed-4aeb-b811-28cf48001750","Type":"ContainerDied","Data":"5d9735d0b4e625dd0eb8a68b36061a57a57676754559f0a692ae6637d69260f0"} Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.092466 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nk56s"] Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.094956 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.097869 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.109690 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nk56s"] Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.165886 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.166183 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.166311 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.166369 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-824hd\" (UniqueName: \"kubernetes.io/projected/ab307495-25f9-4094-81f6-eea4e1824e09-kube-api-access-824hd\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.166502 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-config\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.166533 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.166566 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268005 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268196 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268227 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268245 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-824hd\" (UniqueName: \"kubernetes.io/projected/ab307495-25f9-4094-81f6-eea4e1824e09-kube-api-access-824hd\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268302 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-config\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268320 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.268340 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.269178 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.269197 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.269182 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.269299 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-config\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.269637 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 
14:18:54.269825 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.295160 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-824hd\" (UniqueName: \"kubernetes.io/projected/ab307495-25f9-4094-81f6-eea4e1824e09-kube-api-access-824hd\") pod \"dnsmasq-dns-7d84b4d45c-nk56s\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.414127 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:18:54 crc kubenswrapper[4750]: I0214 14:18:54.961077 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 14 14:18:55 crc kubenswrapper[4750]: I0214 14:18:55.231405 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.052460 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.061398 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141495 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-erlang-cookie\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141591 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-plugins-conf\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141644 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-config-data\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141693 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ccd0d36-6fed-4aeb-b811-28cf48001750-pod-info\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141741 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ccd0d36-6fed-4aeb-b811-28cf48001750-erlang-cookie-secret\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141807 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-plugins\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141870 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-config-data\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141910 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-confd\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141944 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-erlang-cookie\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.141978 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-plugins-conf\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.142014 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5cpq\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-kube-api-access-l5cpq\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") 
" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.142106 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-server-conf\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.142165 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-plugins\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.142207 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-tls\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.142321 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj9jv\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-kube-api-access-wj9jv\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.143433 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.143784 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.144906 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.145368 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.148476 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.149457 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.150763 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.153074 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.153170 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-tls\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.153229 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f513b3ee-aa21-48f3-b5fa-395f0557292a-pod-info\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.153272 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-confd\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.153314 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-server-conf\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.153404 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/f513b3ee-aa21-48f3-b5fa-395f0557292a-erlang-cookie-secret\") pod \"f513b3ee-aa21-48f3-b5fa-395f0557292a\" (UID: \"f513b3ee-aa21-48f3-b5fa-395f0557292a\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.154771 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.154815 4750 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.154835 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.154855 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.154878 4750 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.154900 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.156797 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9ccd0d36-6fed-4aeb-b811-28cf48001750-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.179790 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9ccd0d36-6fed-4aeb-b811-28cf48001750-pod-info" (OuterVolumeSpecName: "pod-info") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.195667 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-kube-api-access-l5cpq" (OuterVolumeSpecName: "kube-api-access-l5cpq") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "kube-api-access-l5cpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.205981 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f513b3ee-aa21-48f3-b5fa-395f0557292a-pod-info" (OuterVolumeSpecName: "pod-info") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.207131 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-kube-api-access-wj9jv" (OuterVolumeSpecName: "kube-api-access-wj9jv") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). 
InnerVolumeSpecName "kube-api-access-wj9jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.207924 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.208941 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f513b3ee-aa21-48f3-b5fa-395f0557292a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.227544 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259225 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259257 4750 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f513b3ee-aa21-48f3-b5fa-395f0557292a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259265 4750 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f513b3ee-aa21-48f3-b5fa-395f0557292a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259274 4750 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ccd0d36-6fed-4aeb-b811-28cf48001750-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259283 4750 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ccd0d36-6fed-4aeb-b811-28cf48001750-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259291 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5cpq\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-kube-api-access-l5cpq\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.259448 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 
14:19:01.259456 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj9jv\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-kube-api-access-wj9jv\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.267498 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-config-data" (OuterVolumeSpecName: "config-data") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: E0214 14:19:01.285203 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978 podName:9ccd0d36-6fed-4aeb-b811-28cf48001750 nodeName:}" failed. No retries permitted until 2026-02-14 14:19:01.785169824 +0000 UTC m=+1613.811159305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.309036 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-config-data" (OuterVolumeSpecName: "config-data") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.316830 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631" (OuterVolumeSpecName: "persistence") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "pvc-77c37da7-26ba-4884-ad3f-20168ab37631". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.380901 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.394319 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.394365 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") on node \"crc\" " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.405860 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-server-conf" (OuterVolumeSpecName: "server-conf") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.413676 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-server-conf" (OuterVolumeSpecName: "server-conf") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.479308 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.486935 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f513b3ee-aa21-48f3-b5fa-395f0557292a" (UID: "f513b3ee-aa21-48f3-b5fa-395f0557292a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.497032 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f513b3ee-aa21-48f3-b5fa-395f0557292a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.497067 4750 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ccd0d36-6fed-4aeb-b811-28cf48001750-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.497076 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ccd0d36-6fed-4aeb-b811-28cf48001750-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.497085 4750 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f513b3ee-aa21-48f3-b5fa-395f0557292a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.559466 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.559685 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-77c37da7-26ba-4884-ad3f-20168ab37631" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631") on node "crc" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.598771 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.801672 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"9ccd0d36-6fed-4aeb-b811-28cf48001750\" (UID: \"9ccd0d36-6fed-4aeb-b811-28cf48001750\") " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.827593 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978" (OuterVolumeSpecName: "persistence") pod "9ccd0d36-6fed-4aeb-b811-28cf48001750" (UID: "9ccd0d36-6fed-4aeb-b811-28cf48001750"). InnerVolumeSpecName "pvc-2355e19e-f5b3-472a-9a0d-78024ca47978". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.905363 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") on node \"crc\" " Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.945451 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.945606 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2355e19e-f5b3-472a-9a0d-78024ca47978" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978") on node "crc" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.965900 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f513b3ee-aa21-48f3-b5fa-395f0557292a","Type":"ContainerDied","Data":"a8a073ff2e97b012e16dcd00d36aefb30e8feced59211c0ea5a024d830662efb"} Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.965915 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.965963 4750 scope.go:117] "RemoveContainer" containerID="e3dfaeac3877491d9a6ba7973adcdbda21b465a624a27b5515e9052b76c9e5d2" Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.973308 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9ccd0d36-6fed-4aeb-b811-28cf48001750","Type":"ContainerDied","Data":"3ec6a5c479ebf34e3c4211c7378564992d5f77025702202ee642f11321d96535"} Feb 14 14:19:01 crc kubenswrapper[4750]: I0214 14:19:01.973396 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.009286 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.040241 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.062294 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.081223 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.090088 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.119518 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:19:02 crc kubenswrapper[4750]: E0214 14:19:02.119953 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="rabbitmq" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.119985 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="rabbitmq" Feb 14 14:19:02 crc kubenswrapper[4750]: E0214 14:19:02.120007 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="rabbitmq" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.120013 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="rabbitmq" Feb 14 14:19:02 crc kubenswrapper[4750]: E0214 14:19:02.120046 4750 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="setup-container" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.120053 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="setup-container" Feb 14 14:19:02 crc kubenswrapper[4750]: E0214 14:19:02.120062 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="setup-container" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.120068 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="setup-container" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.120279 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" containerName="rabbitmq" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.120299 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" containerName="rabbitmq" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.121515 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.124241 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pmvlc" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.124437 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.124543 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.124640 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.124728 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.124838 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.125010 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.155421 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.157647 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.169297 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.187257 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.316989 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317034 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317161 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317200 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317226 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0191954f-3b7c-4102-9784-f775fa6e08f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317261 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317283 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0191954f-3b7c-4102-9784-f775fa6e08f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317336 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317363 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38d75bde-7432-41e0-860c-b2d7219e518a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317380 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-config-data\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317503 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317567 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8cr5\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-kube-api-access-g8cr5\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317639 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317685 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317720 
4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317753 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317833 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38d75bde-7432-41e0-860c-b2d7219e518a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317911 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317939 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.317962 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh57f\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-kube-api-access-bh57f\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.318241 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.318310 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420238 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420281 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420297 4750 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0191954f-3b7c-4102-9784-f775fa6e08f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420315 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420336 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0191954f-3b7c-4102-9784-f775fa6e08f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420362 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420404 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38d75bde-7432-41e0-860c-b2d7219e518a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420420 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-config-data\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420524 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420541 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8cr5\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-kube-api-access-g8cr5\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420565 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.420970 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.421157 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.421643 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.421704 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-config-data\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.423537 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.424370 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.425153 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.428036 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.424546 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.433580 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.433632 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38d75bde-7432-41e0-860c-b2d7219e518a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.433733 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.433772 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.433802 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh57f\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-kube-api-access-bh57f\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.433872 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.434368 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38d75bde-7432-41e0-860c-b2d7219e518a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.434540 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.434932 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.435015 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.435100 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.435152 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.435179 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.435210 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4154686648465faa147d42c607d238319e8043b2782d62a53f550aeb50ce6488/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.435260 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.436067 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.436094 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d03b6670b43bc96c046bc6da8196f0e4b43abc0290cf3871fba8c6066ca9668/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.436326 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38d75bde-7432-41e0-860c-b2d7219e518a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.436387 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0191954f-3b7c-4102-9784-f775fa6e08f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.438912 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.439929 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0191954f-3b7c-4102-9784-f775fa6e08f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.440624 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.441887 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8cr5\" (UniqueName: \"kubernetes.io/projected/38d75bde-7432-41e0-860c-b2d7219e518a-kube-api-access-g8cr5\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.442950 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38d75bde-7432-41e0-860c-b2d7219e518a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.442962 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0191954f-3b7c-4102-9784-f775fa6e08f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.451917 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh57f\" (UniqueName: \"kubernetes.io/projected/0191954f-3b7c-4102-9784-f775fa6e08f2-kube-api-access-bh57f\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.530093 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77c37da7-26ba-4884-ad3f-20168ab37631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77c37da7-26ba-4884-ad3f-20168ab37631\") pod \"rabbitmq-server-2\" (UID: \"38d75bde-7432-41e0-860c-b2d7219e518a\") " pod="openstack/rabbitmq-server-2" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.553560 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2355e19e-f5b3-472a-9a0d-78024ca47978\") pod \"rabbitmq-cell1-server-0\" (UID: \"0191954f-3b7c-4102-9784-f775fa6e08f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.746401 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.758192 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccd0d36-6fed-4aeb-b811-28cf48001750" path="/var/lib/kubelet/pods/9ccd0d36-6fed-4aeb-b811-28cf48001750/volumes" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.760656 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f513b3ee-aa21-48f3-b5fa-395f0557292a" path="/var/lib/kubelet/pods/f513b3ee-aa21-48f3-b5fa-395f0557292a/volumes" Feb 14 14:19:02 crc kubenswrapper[4750]: I0214 14:19:02.775722 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 14:19:03 crc kubenswrapper[4750]: E0214 14:19:03.977285 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 14 14:19:03 crc kubenswrapper[4750]: E0214 14:19:03.977655 4750 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 14 14:19:03 crc kubenswrapper[4750]: E0214 14:19:03.977815 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr8zq,ReadOnly:true,MountP
ath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-zrqpc_openstack(15aeb180-f3b4-46a0-847c-5befbf340b51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:19:03 crc kubenswrapper[4750]: E0214 14:19:03.979189 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-zrqpc" podUID="15aeb180-f3b4-46a0-847c-5befbf340b51" Feb 14 14:19:04 crc kubenswrapper[4750]: E0214 14:19:04.011423 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-zrqpc" podUID="15aeb180-f3b4-46a0-847c-5befbf340b51" Feb 14 14:19:04 crc kubenswrapper[4750]: E0214 14:19:04.323162 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 14 14:19:04 crc 
kubenswrapper[4750]: E0214 14:19:04.323544 4750 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 14 14:19:04 crc kubenswrapper[4750]: E0214 14:19:04.323779 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59h5f8h7dh56bh679h69h659h98h6ch5b7h5f9h647h79h656h58bh674hcdh5fdh567h7bh57dh56h66fhcdh54bh94h5f6h8fh647h59fh59h567q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x44bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b67f7611-5ec3-4e68-86cf-52b26c4e3b1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 14:19:04 crc kubenswrapper[4750]: I0214 14:19:04.351439 4750 scope.go:117] "RemoveContainer" containerID="f621366506caeb02dee9c6bde64abfe76f9b07ca0844ab51f1565b84455dc23c" Feb 14 14:19:04 crc kubenswrapper[4750]: I0214 14:19:04.404600 4750 scope.go:117] "RemoveContainer" containerID="5d9735d0b4e625dd0eb8a68b36061a57a57676754559f0a692ae6637d69260f0" Feb 14 14:19:04 crc kubenswrapper[4750]: I0214 14:19:04.540069 4750 scope.go:117] "RemoveContainer" containerID="0fccd90bee9500e05f117ba916dd6afb5f3593b6429c3514da328a678356bd23" Feb 14 14:19:04 crc kubenswrapper[4750]: I0214 14:19:04.915588 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nk56s"] Feb 14 14:19:05 crc kubenswrapper[4750]: I0214 14:19:05.027880 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" 
event={"ID":"ab307495-25f9-4094-81f6-eea4e1824e09","Type":"ContainerStarted","Data":"f7ad011c1c0adefaf125c7bf3988510ab15697b055c2a5b8728d35eb899f0004"} Feb 14 14:19:05 crc kubenswrapper[4750]: I0214 14:19:05.085895 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 14:19:05 crc kubenswrapper[4750]: I0214 14:19:05.113662 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 14:19:05 crc kubenswrapper[4750]: I0214 14:19:05.744100 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:19:05 crc kubenswrapper[4750]: E0214 14:19:05.744652 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:19:06 crc kubenswrapper[4750]: I0214 14:19:06.048074 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"38d75bde-7432-41e0-860c-b2d7219e518a","Type":"ContainerStarted","Data":"991952a5475b9b4d08e5c7c98fd061010f2cc1c34c742cfaa1878245cb500cd5"} Feb 14 14:19:06 crc kubenswrapper[4750]: I0214 14:19:06.050576 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f","Type":"ContainerStarted","Data":"e82cbdfbf0c77cea8114c32e12a40f86ebb3c73f393d36f0cd07c5de2b3bc664"} Feb 14 14:19:06 crc kubenswrapper[4750]: I0214 14:19:06.050610 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f","Type":"ContainerStarted","Data":"bf4483e53eeb643300f151d97a024dfeb7f0560a03e977ecfd0f49a344e69930"} Feb 14 14:19:06 crc kubenswrapper[4750]: I0214 14:19:06.052499 4750 generic.go:334] "Generic (PLEG): container finished" podID="ab307495-25f9-4094-81f6-eea4e1824e09" containerID="3176af268c714de87b1072b6cd055b844b8515aec703e7d16568ba3ff7ea5e66" exitCode=0 Feb 14 14:19:06 crc kubenswrapper[4750]: I0214 14:19:06.053139 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" event={"ID":"ab307495-25f9-4094-81f6-eea4e1824e09","Type":"ContainerDied","Data":"3176af268c714de87b1072b6cd055b844b8515aec703e7d16568ba3ff7ea5e66"} Feb 14 14:19:06 crc kubenswrapper[4750]: I0214 14:19:06.055524 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0191954f-3b7c-4102-9784-f775fa6e08f2","Type":"ContainerStarted","Data":"30a13aba4405b37ed9ce82d688d60352824a458f17eec96e5bcdfb816f793b0a"} Feb 14 14:19:07 crc kubenswrapper[4750]: I0214 14:19:07.071473 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0191954f-3b7c-4102-9784-f775fa6e08f2","Type":"ContainerStarted","Data":"3e078e38fbf5c870a15947563e8c294ab521da594949cf5a378826ccdd717ebe"} Feb 14 14:19:07 crc kubenswrapper[4750]: E0214 14:19:07.531828 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b67f7611-5ec3-4e68-86cf-52b26c4e3b1f" Feb 14 14:19:08 crc kubenswrapper[4750]: I0214 14:19:08.087802 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f","Type":"ContainerStarted","Data":"0ae8cd3d95518485fca877811642b26da36b36d3cec49d69464a306b176391f8"} Feb 14 
14:19:08 crc kubenswrapper[4750]: I0214 14:19:08.089051 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 14:19:08 crc kubenswrapper[4750]: E0214 14:19:08.090256 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b67f7611-5ec3-4e68-86cf-52b26c4e3b1f" Feb 14 14:19:08 crc kubenswrapper[4750]: I0214 14:19:08.092640 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" event={"ID":"ab307495-25f9-4094-81f6-eea4e1824e09","Type":"ContainerStarted","Data":"383fdb87415db4b153b4333aa2fe533355b24a5952b38bb3d6d16dcc53298472"} Feb 14 14:19:08 crc kubenswrapper[4750]: I0214 14:19:08.092807 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:19:08 crc kubenswrapper[4750]: I0214 14:19:08.095883 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"38d75bde-7432-41e0-860c-b2d7219e518a","Type":"ContainerStarted","Data":"d5815b3bd138919d8bc3e41d73491f46a1a868a7d27a3c7b90dc605942fa4dab"} Feb 14 14:19:08 crc kubenswrapper[4750]: I0214 14:19:08.172454 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" podStartSLOduration=14.172437121 podStartE2EDuration="14.172437121s" podCreationTimestamp="2026-02-14 14:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:19:08.164556716 +0000 UTC m=+1620.190546197" watchObservedRunningTime="2026-02-14 14:19:08.172437121 +0000 UTC m=+1620.198426602" Feb 14 14:19:09 crc kubenswrapper[4750]: E0214 14:19:09.110588 4750 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b67f7611-5ec3-4e68-86cf-52b26c4e3b1f" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.416836 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.486901 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qngbv"] Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.487130 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="dnsmasq-dns" containerID="cri-o://280dc5d6c9a7128284ab012ac69d59a5f12f047c503d5a32ea999c2c535e060e" gracePeriod=10 Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.681502 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l7bxg"] Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.683644 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.703815 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l7bxg"] Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796553 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796600 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-config\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796661 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796723 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796852 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x9sxl\" (UniqueName: \"kubernetes.io/projected/1d5fe023-9111-4eb5-af17-2efdd4b3a354-kube-api-access-x9sxl\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796872 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.796918 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.899731 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.899835 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9sxl\" (UniqueName: \"kubernetes.io/projected/1d5fe023-9111-4eb5-af17-2efdd4b3a354-kube-api-access-x9sxl\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.899879 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.899947 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.900017 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.900055 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-config\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.902203 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.902217 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.902911 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.904219 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.904926 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.905019 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-config\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.905772 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d5fe023-9111-4eb5-af17-2efdd4b3a354-dns-svc\") pod 
\"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:14 crc kubenswrapper[4750]: I0214 14:19:14.927004 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9sxl\" (UniqueName: \"kubernetes.io/projected/1d5fe023-9111-4eb5-af17-2efdd4b3a354-kube-api-access-x9sxl\") pod \"dnsmasq-dns-6f6df4f56c-l7bxg\" (UID: \"1d5fe023-9111-4eb5-af17-2efdd4b3a354\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.054601 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.219800 4750 generic.go:334] "Generic (PLEG): container finished" podID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerID="280dc5d6c9a7128284ab012ac69d59a5f12f047c503d5a32ea999c2c535e060e" exitCode=0 Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.219861 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" event={"ID":"2e75deae-648c-49ef-a28b-3025529a7b0c","Type":"ContainerDied","Data":"280dc5d6c9a7128284ab012ac69d59a5f12f047c503d5a32ea999c2c535e060e"} Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.219920 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" event={"ID":"2e75deae-648c-49ef-a28b-3025529a7b0c","Type":"ContainerDied","Data":"3fb3ba20796e536ae98115e975e33798f5a0782aed1892f0a7412b965be7668f"} Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.219934 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb3ba20796e536ae98115e975e33798f5a0782aed1892f0a7412b965be7668f" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.229481 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.321027 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-nb\") pod \"2e75deae-648c-49ef-a28b-3025529a7b0c\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.321192 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-config\") pod \"2e75deae-648c-49ef-a28b-3025529a7b0c\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.321275 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-svc\") pod \"2e75deae-648c-49ef-a28b-3025529a7b0c\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.321304 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2lbb\" (UniqueName: \"kubernetes.io/projected/2e75deae-648c-49ef-a28b-3025529a7b0c-kube-api-access-z2lbb\") pod \"2e75deae-648c-49ef-a28b-3025529a7b0c\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.321402 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-swift-storage-0\") pod \"2e75deae-648c-49ef-a28b-3025529a7b0c\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.321424 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-sb\") pod \"2e75deae-648c-49ef-a28b-3025529a7b0c\" (UID: \"2e75deae-648c-49ef-a28b-3025529a7b0c\") " Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.380510 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e75deae-648c-49ef-a28b-3025529a7b0c-kube-api-access-z2lbb" (OuterVolumeSpecName: "kube-api-access-z2lbb") pod "2e75deae-648c-49ef-a28b-3025529a7b0c" (UID: "2e75deae-648c-49ef-a28b-3025529a7b0c"). InnerVolumeSpecName "kube-api-access-z2lbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.422839 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e75deae-648c-49ef-a28b-3025529a7b0c" (UID: "2e75deae-648c-49ef-a28b-3025529a7b0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.454161 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e75deae-648c-49ef-a28b-3025529a7b0c" (UID: "2e75deae-648c-49ef-a28b-3025529a7b0c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.455169 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2lbb\" (UniqueName: \"kubernetes.io/projected/2e75deae-648c-49ef-a28b-3025529a7b0c-kube-api-access-z2lbb\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.455203 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.455214 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.508621 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-config" (OuterVolumeSpecName: "config") pod "2e75deae-648c-49ef-a28b-3025529a7b0c" (UID: "2e75deae-648c-49ef-a28b-3025529a7b0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.539821 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e75deae-648c-49ef-a28b-3025529a7b0c" (UID: "2e75deae-648c-49ef-a28b-3025529a7b0c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.561075 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.561382 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.619598 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e75deae-648c-49ef-a28b-3025529a7b0c" (UID: "2e75deae-648c-49ef-a28b-3025529a7b0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.659120 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l7bxg"] Feb 14 14:19:15 crc kubenswrapper[4750]: I0214 14:19:15.664209 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e75deae-648c-49ef-a28b-3025529a7b0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.232436 4750 generic.go:334] "Generic (PLEG): container finished" podID="1d5fe023-9111-4eb5-af17-2efdd4b3a354" containerID="e7b058404c49236aaaca6b5cfa48413830680b506570fac08b4b14a7c45cffc4" exitCode=0 Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.232539 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.232530 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" event={"ID":"1d5fe023-9111-4eb5-af17-2efdd4b3a354","Type":"ContainerDied","Data":"e7b058404c49236aaaca6b5cfa48413830680b506570fac08b4b14a7c45cffc4"} Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.232615 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" event={"ID":"1d5fe023-9111-4eb5-af17-2efdd4b3a354","Type":"ContainerStarted","Data":"979991e14c25e33b13f8ec31b06ff0a3318bfd96bd3c156cca0bd05addd5c9a8"} Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.379150 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qngbv"] Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.405216 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qngbv"] Feb 14 14:19:16 crc kubenswrapper[4750]: I0214 14:19:16.757335 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" path="/var/lib/kubelet/pods/2e75deae-648c-49ef-a28b-3025529a7b0c/volumes" Feb 14 14:19:17 crc kubenswrapper[4750]: I0214 14:19:17.244784 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" event={"ID":"1d5fe023-9111-4eb5-af17-2efdd4b3a354","Type":"ContainerStarted","Data":"3d753ab6bb37bcaeec0b0d1d1822e53408508352b445d03884c70e7b724248cf"} Feb 14 14:19:17 crc kubenswrapper[4750]: I0214 14:19:17.245153 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:17 crc kubenswrapper[4750]: I0214 14:19:17.268645 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" podStartSLOduration=3.268621963 
podStartE2EDuration="3.268621963s" podCreationTimestamp="2026-02-14 14:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:19:17.262422926 +0000 UTC m=+1629.288412407" watchObservedRunningTime="2026-02-14 14:19:17.268621963 +0000 UTC m=+1629.294611464" Feb 14 14:19:17 crc kubenswrapper[4750]: I0214 14:19:17.741763 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:19:17 crc kubenswrapper[4750]: E0214 14:19:17.742052 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:19:19 crc kubenswrapper[4750]: I0214 14:19:19.272735 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zrqpc" event={"ID":"15aeb180-f3b4-46a0-847c-5befbf340b51","Type":"ContainerStarted","Data":"a7f9965ab1ae6fcb5e4a95f0e24fe14c198796121536c79296e4669d819ad326"} Feb 14 14:19:19 crc kubenswrapper[4750]: I0214 14:19:19.295459 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zrqpc" podStartSLOduration=2.519098847 podStartE2EDuration="41.295437009s" podCreationTimestamp="2026-02-14 14:18:38 +0000 UTC" firstStartedPulling="2026-02-14 14:18:39.177913505 +0000 UTC m=+1591.203902996" lastFinishedPulling="2026-02-14 14:19:17.954251677 +0000 UTC m=+1629.980241158" observedRunningTime="2026-02-14 14:19:19.293232746 +0000 UTC m=+1631.319222247" watchObservedRunningTime="2026-02-14 14:19:19.295437009 +0000 UTC m=+1631.321426500" Feb 14 14:19:20 crc kubenswrapper[4750]: I0214 
14:19:20.059251 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qngbv" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.3:5353: i/o timeout" Feb 14 14:19:20 crc kubenswrapper[4750]: I0214 14:19:20.760730 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 14:19:21 crc kubenswrapper[4750]: I0214 14:19:21.155890 4750 scope.go:117] "RemoveContainer" containerID="8b34863cd61bc4fcce09abffb22cb82f6d883f9352c7c43d3d04edbff8a50221" Feb 14 14:19:21 crc kubenswrapper[4750]: I0214 14:19:21.296272 4750 generic.go:334] "Generic (PLEG): container finished" podID="15aeb180-f3b4-46a0-847c-5befbf340b51" containerID="a7f9965ab1ae6fcb5e4a95f0e24fe14c198796121536c79296e4669d819ad326" exitCode=0 Feb 14 14:19:21 crc kubenswrapper[4750]: I0214 14:19:21.296363 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zrqpc" event={"ID":"15aeb180-f3b4-46a0-847c-5befbf340b51","Type":"ContainerDied","Data":"a7f9965ab1ae6fcb5e4a95f0e24fe14c198796121536c79296e4669d819ad326"} Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.315235 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b67f7611-5ec3-4e68-86cf-52b26c4e3b1f","Type":"ContainerStarted","Data":"0df27e135d1db17f740738f33714625556b8f94d7e2a34a70bc873b40684d577"} Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.339009 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.31700999 podStartE2EDuration="40.338993076s" podCreationTimestamp="2026-02-14 14:18:42 +0000 UTC" firstStartedPulling="2026-02-14 14:18:43.928323467 +0000 UTC m=+1595.954312948" lastFinishedPulling="2026-02-14 14:19:20.950306513 +0000 UTC m=+1632.976296034" observedRunningTime="2026-02-14 14:19:22.334322343 +0000 UTC m=+1634.360311824" 
watchObservedRunningTime="2026-02-14 14:19:22.338993076 +0000 UTC m=+1634.364982557" Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.856657 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zrqpc" Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.959202 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-config-data\") pod \"15aeb180-f3b4-46a0-847c-5befbf340b51\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.959450 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8zq\" (UniqueName: \"kubernetes.io/projected/15aeb180-f3b4-46a0-847c-5befbf340b51-kube-api-access-sr8zq\") pod \"15aeb180-f3b4-46a0-847c-5befbf340b51\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.959531 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-combined-ca-bundle\") pod \"15aeb180-f3b4-46a0-847c-5befbf340b51\" (UID: \"15aeb180-f3b4-46a0-847c-5befbf340b51\") " Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.965545 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15aeb180-f3b4-46a0-847c-5befbf340b51-kube-api-access-sr8zq" (OuterVolumeSpecName: "kube-api-access-sr8zq") pod "15aeb180-f3b4-46a0-847c-5befbf340b51" (UID: "15aeb180-f3b4-46a0-847c-5befbf340b51"). InnerVolumeSpecName "kube-api-access-sr8zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:22 crc kubenswrapper[4750]: I0214 14:19:22.990766 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15aeb180-f3b4-46a0-847c-5befbf340b51" (UID: "15aeb180-f3b4-46a0-847c-5befbf340b51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.062770 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.062802 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8zq\" (UniqueName: \"kubernetes.io/projected/15aeb180-f3b4-46a0-847c-5befbf340b51-kube-api-access-sr8zq\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.068813 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-config-data" (OuterVolumeSpecName: "config-data") pod "15aeb180-f3b4-46a0-847c-5befbf340b51" (UID: "15aeb180-f3b4-46a0-847c-5befbf340b51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.167920 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15aeb180-f3b4-46a0-847c-5befbf340b51-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.333575 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zrqpc" event={"ID":"15aeb180-f3b4-46a0-847c-5befbf340b51","Type":"ContainerDied","Data":"703305b59980b010dad01a07ab0d325c9ec6d034b3266a271b409d391fc82a05"} Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.333646 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="703305b59980b010dad01a07ab0d325c9ec6d034b3266a271b409d391fc82a05" Feb 14 14:19:23 crc kubenswrapper[4750]: I0214 14:19:23.333613 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zrqpc" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.406946 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d98b7d7bf-rc97s"] Feb 14 14:19:24 crc kubenswrapper[4750]: E0214 14:19:24.407642 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="init" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.407655 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="init" Feb 14 14:19:24 crc kubenswrapper[4750]: E0214 14:19:24.407697 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="dnsmasq-dns" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.407703 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="dnsmasq-dns" Feb 14 14:19:24 crc kubenswrapper[4750]: E0214 14:19:24.407718 
4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15aeb180-f3b4-46a0-847c-5befbf340b51" containerName="heat-db-sync" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.407725 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="15aeb180-f3b4-46a0-847c-5befbf340b51" containerName="heat-db-sync" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.407928 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="15aeb180-f3b4-46a0-847c-5befbf340b51" containerName="heat-db-sync" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.407950 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e75deae-648c-49ef-a28b-3025529a7b0c" containerName="dnsmasq-dns" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.408777 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.432167 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d98b7d7bf-rc97s"] Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.498055 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-combined-ca-bundle\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.498106 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-config-data-custom\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.498414 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsbw\" (UniqueName: \"kubernetes.io/projected/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-kube-api-access-wlsbw\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.498487 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-config-data\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.517285 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5cddcdd877-f796q"] Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.518979 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.537185 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6fd8cd6df7-qsknx"] Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.538908 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.563263 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6fd8cd6df7-qsknx"] Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.579570 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cddcdd877-f796q"] Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614141 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-combined-ca-bundle\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614206 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-internal-tls-certs\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614243 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-config-data-custom\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614269 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-combined-ca-bundle\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" 
Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614299 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chsqs\" (UniqueName: \"kubernetes.io/projected/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-kube-api-access-chsqs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614328 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-config-data-custom\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614366 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddvf\" (UniqueName: \"kubernetes.io/projected/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-kube-api-access-nddvf\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614403 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-config-data-custom\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614452 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsbw\" (UniqueName: \"kubernetes.io/projected/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-kube-api-access-wlsbw\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: 
\"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614471 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-config-data\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614498 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-config-data\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614593 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-internal-tls-certs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614649 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-public-tls-certs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614691 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-public-tls-certs\") pod \"heat-api-5cddcdd877-f796q\" (UID: 
\"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614713 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-combined-ca-bundle\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.614742 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-config-data\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.626704 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-config-data\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.630889 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-config-data-custom\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.636012 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsbw\" (UniqueName: \"kubernetes.io/projected/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-kube-api-access-wlsbw\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " 
pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.637246 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf-combined-ca-bundle\") pod \"heat-engine-d98b7d7bf-rc97s\" (UID: \"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf\") " pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717095 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-internal-tls-certs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717190 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-public-tls-certs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717222 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-public-tls-certs\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717243 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-combined-ca-bundle\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 
crc kubenswrapper[4750]: I0214 14:19:24.717262 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-config-data\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717304 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-internal-tls-certs\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717327 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-combined-ca-bundle\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717350 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chsqs\" (UniqueName: \"kubernetes.io/projected/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-kube-api-access-chsqs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717371 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-config-data-custom\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717397 
4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddvf\" (UniqueName: \"kubernetes.io/projected/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-kube-api-access-nddvf\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717424 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-config-data-custom\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.717465 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-config-data\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.722738 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-internal-tls-certs\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.722814 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-config-data\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.723814 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-config-data-custom\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.725240 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-config-data-custom\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.725878 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-config-data\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.726757 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-public-tls-certs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.731816 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-combined-ca-bundle\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.732526 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.735861 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-combined-ca-bundle\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.738172 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-internal-tls-certs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.739984 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-public-tls-certs\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.742963 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddvf\" (UniqueName: \"kubernetes.io/projected/d7bb0ba4-eebd-41f7-935f-b9ba2a635618-kube-api-access-nddvf\") pod \"heat-api-5cddcdd877-f796q\" (UID: \"d7bb0ba4-eebd-41f7-935f-b9ba2a635618\") " pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.747722 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chsqs\" (UniqueName: \"kubernetes.io/projected/5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5-kube-api-access-chsqs\") pod \"heat-cfnapi-6fd8cd6df7-qsknx\" (UID: \"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5\") " 
pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.851059 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:24 crc kubenswrapper[4750]: I0214 14:19:24.864490 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.055887 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-l7bxg" Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.138347 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nk56s"] Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.138578 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" containerName="dnsmasq-dns" containerID="cri-o://383fdb87415db4b153b4333aa2fe533355b24a5952b38bb3d6d16dcc53298472" gracePeriod=10 Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.363436 4750 generic.go:334] "Generic (PLEG): container finished" podID="ab307495-25f9-4094-81f6-eea4e1824e09" containerID="383fdb87415db4b153b4333aa2fe533355b24a5952b38bb3d6d16dcc53298472" exitCode=0 Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.363481 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" event={"ID":"ab307495-25f9-4094-81f6-eea4e1824e09","Type":"ContainerDied","Data":"383fdb87415db4b153b4333aa2fe533355b24a5952b38bb3d6d16dcc53298472"} Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.491384 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cddcdd877-f796q"] Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.525280 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-engine-d98b7d7bf-rc97s"] Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.771267 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6fd8cd6df7-qsknx"] Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.892416 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.958418 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-config\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.958839 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-nb\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.959708 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-openstack-edpm-ipam\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.959888 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-svc\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.960055 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-824hd\" (UniqueName: 
\"kubernetes.io/projected/ab307495-25f9-4094-81f6-eea4e1824e09-kube-api-access-824hd\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.960170 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-sb\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.960475 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-swift-storage-0\") pod \"ab307495-25f9-4094-81f6-eea4e1824e09\" (UID: \"ab307495-25f9-4094-81f6-eea4e1824e09\") " Feb 14 14:19:25 crc kubenswrapper[4750]: I0214 14:19:25.984727 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab307495-25f9-4094-81f6-eea4e1824e09-kube-api-access-824hd" (OuterVolumeSpecName: "kube-api-access-824hd") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "kube-api-access-824hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.060602 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.060844 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.067170 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.067214 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-824hd\" (UniqueName: \"kubernetes.io/projected/ab307495-25f9-4094-81f6-eea4e1824e09-kube-api-access-824hd\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.067229 4750 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.078235 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.084607 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.097003 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.105928 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-config" (OuterVolumeSpecName: "config") pod "ab307495-25f9-4094-81f6-eea4e1824e09" (UID: "ab307495-25f9-4094-81f6-eea4e1824e09"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.169648 4750 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.169694 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.169706 4750 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-config\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.169716 4750 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab307495-25f9-4094-81f6-eea4e1824e09-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.379383 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" event={"ID":"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5","Type":"ContainerStarted","Data":"a2df5a1e005871c4f191f8b4b4138f63ed38023d627a47291620bd42222fe447"} Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.381785 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d98b7d7bf-rc97s" event={"ID":"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf","Type":"ContainerStarted","Data":"d3e28add34071c5b13efe123bca0cf9c5606bf3439a3ffeff7c43bba9dbe5bef"} Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.381831 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d98b7d7bf-rc97s" 
event={"ID":"7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf","Type":"ContainerStarted","Data":"5de4abe75b71dc421c2ca3c855139a5c0a2e7073f3f521c7e455087e4f017c57"} Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.381979 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.384776 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" event={"ID":"ab307495-25f9-4094-81f6-eea4e1824e09","Type":"ContainerDied","Data":"f7ad011c1c0adefaf125c7bf3988510ab15697b055c2a5b8728d35eb899f0004"} Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.384809 4750 scope.go:117] "RemoveContainer" containerID="383fdb87415db4b153b4333aa2fe533355b24a5952b38bb3d6d16dcc53298472" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.384900 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nk56s" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.386030 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cddcdd877-f796q" event={"ID":"d7bb0ba4-eebd-41f7-935f-b9ba2a635618","Type":"ContainerStarted","Data":"0a3f7f99535974661254891d8c62b4e01df83c5abd9fcdca9ed7f818615e0cc2"} Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.420561 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d98b7d7bf-rc97s" podStartSLOduration=2.420540441 podStartE2EDuration="2.420540441s" podCreationTimestamp="2026-02-14 14:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:19:26.401947912 +0000 UTC m=+1638.427937393" watchObservedRunningTime="2026-02-14 14:19:26.420540441 +0000 UTC m=+1638.446529922" Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.448055 4750 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nk56s"] Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.464170 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nk56s"] Feb 14 14:19:26 crc kubenswrapper[4750]: I0214 14:19:26.762485 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" path="/var/lib/kubelet/pods/ab307495-25f9-4094-81f6-eea4e1824e09/volumes" Feb 14 14:19:27 crc kubenswrapper[4750]: I0214 14:19:27.182931 4750 scope.go:117] "RemoveContainer" containerID="3176af268c714de87b1072b6cd055b844b8515aec703e7d16568ba3ff7ea5e66" Feb 14 14:19:28 crc kubenswrapper[4750]: I0214 14:19:28.412087 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cddcdd877-f796q" event={"ID":"d7bb0ba4-eebd-41f7-935f-b9ba2a635618","Type":"ContainerStarted","Data":"f66a985212b390c3ad62125c2ebd9ecfe4156968b6981c82e9229a131ff47522"} Feb 14 14:19:28 crc kubenswrapper[4750]: I0214 14:19:28.412937 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:28 crc kubenswrapper[4750]: I0214 14:19:28.413872 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" event={"ID":"5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5","Type":"ContainerStarted","Data":"49401733c63aab2e30b4b2fa8a2c40ae7166e18020a17e64546dcf342c35c001"} Feb 14 14:19:28 crc kubenswrapper[4750]: I0214 14:19:28.414201 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:28 crc kubenswrapper[4750]: I0214 14:19:28.435795 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cddcdd877-f796q" podStartSLOduration=2.730277012 podStartE2EDuration="4.435778087s" podCreationTimestamp="2026-02-14 14:19:24 +0000 UTC" firstStartedPulling="2026-02-14 14:19:25.495546028 +0000 UTC 
m=+1637.521535509" lastFinishedPulling="2026-02-14 14:19:27.201047103 +0000 UTC m=+1639.227036584" observedRunningTime="2026-02-14 14:19:28.429856728 +0000 UTC m=+1640.455846199" watchObservedRunningTime="2026-02-14 14:19:28.435778087 +0000 UTC m=+1640.461767578" Feb 14 14:19:28 crc kubenswrapper[4750]: I0214 14:19:28.764826 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:19:28 crc kubenswrapper[4750]: E0214 14:19:28.765440 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.478984 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5cddcdd877-f796q" Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.496034 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.505319 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6fd8cd6df7-qsknx" podStartSLOduration=11.062759611 podStartE2EDuration="12.505299815s" podCreationTimestamp="2026-02-14 14:19:24 +0000 UTC" firstStartedPulling="2026-02-14 14:19:25.78106842 +0000 UTC m=+1637.807057901" lastFinishedPulling="2026-02-14 14:19:27.223608614 +0000 UTC m=+1639.249598105" observedRunningTime="2026-02-14 14:19:28.472038788 +0000 UTC m=+1640.498028279" watchObservedRunningTime="2026-02-14 14:19:36.505299815 +0000 UTC m=+1648.531289296" Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.561249 4750 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77656b9796-ff5r4"] Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.561496 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-77656b9796-ff5r4" podUID="1e1b3da1-2a06-4b24-86e6-921864918a8e" containerName="heat-api" containerID="cri-o://7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8" gracePeriod=60 Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.602716 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84668977c4-sql9c"] Feb 14 14:19:36 crc kubenswrapper[4750]: I0214 14:19:36.602934 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84668977c4-sql9c" podUID="02f8868c-0e0f-4e75-9ef2-d74188d5fcda" containerName="heat-cfnapi" containerID="cri-o://4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293" gracePeriod=60 Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.793875 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl"] Feb 14 14:19:38 crc kubenswrapper[4750]: E0214 14:19:38.795020 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" containerName="init" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.795036 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" containerName="init" Feb 14 14:19:38 crc kubenswrapper[4750]: E0214 14:19:38.795067 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" containerName="dnsmasq-dns" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.795076 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" containerName="dnsmasq-dns" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.795422 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ab307495-25f9-4094-81f6-eea4e1824e09" containerName="dnsmasq-dns" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.797450 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.800813 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.800825 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.801906 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.802923 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.818212 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl"] Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.908100 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.908196 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.908244 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:38 crc kubenswrapper[4750]: I0214 14:19:38.908514 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbrt\" (UniqueName: \"kubernetes.io/projected/579b6931-42c7-4a8f-9045-b9b993aa3fbd-kube-api-access-qbbrt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.010953 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbbrt\" (UniqueName: \"kubernetes.io/projected/579b6931-42c7-4a8f-9045-b9b993aa3fbd-kube-api-access-qbbrt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.011299 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: 
\"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.011445 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.011600 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.017361 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.017705 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.024025 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.033508 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbbrt\" (UniqueName: \"kubernetes.io/projected/579b6931-42c7-4a8f-9045-b9b993aa3fbd-kube-api-access-qbbrt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.136392 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.560317 4750 generic.go:334] "Generic (PLEG): container finished" podID="0191954f-3b7c-4102-9784-f775fa6e08f2" containerID="3e078e38fbf5c870a15947563e8c294ab521da594949cf5a378826ccdd717ebe" exitCode=0 Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.560388 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0191954f-3b7c-4102-9784-f775fa6e08f2","Type":"ContainerDied","Data":"3e078e38fbf5c870a15947563e8c294ab521da594949cf5a378826ccdd717ebe"} Feb 14 14:19:39 crc kubenswrapper[4750]: I0214 14:19:39.991326 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl"] Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.562529 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.574838 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.576659 4750 generic.go:334] "Generic (PLEG): container finished" podID="1e1b3da1-2a06-4b24-86e6-921864918a8e" containerID="7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8" exitCode=0 Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.576704 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77656b9796-ff5r4" event={"ID":"1e1b3da1-2a06-4b24-86e6-921864918a8e","Type":"ContainerDied","Data":"7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.576751 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77656b9796-ff5r4" event={"ID":"1e1b3da1-2a06-4b24-86e6-921864918a8e","Type":"ContainerDied","Data":"d8787dc9c117fa3763e58750a45ac36fc2140b1c1abda414951a41bb5b6f7686"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.576767 4750 scope.go:117] "RemoveContainer" containerID="7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.576937 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77656b9796-ff5r4" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.603544 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0191954f-3b7c-4102-9784-f775fa6e08f2","Type":"ContainerStarted","Data":"2262033990f43f6d89a42e057785229ae575e3aa7f07c72dc0ccd9ced01f0283"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.604865 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.609814 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" event={"ID":"579b6931-42c7-4a8f-9045-b9b993aa3fbd","Type":"ContainerStarted","Data":"68a74c12b39c80be7b12af88cef579c57101d2d406eda2fc92b474b57dd5207c"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.622873 4750 generic.go:334] "Generic (PLEG): container finished" podID="38d75bde-7432-41e0-860c-b2d7219e518a" containerID="d5815b3bd138919d8bc3e41d73491f46a1a868a7d27a3c7b90dc605942fa4dab" exitCode=0 Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.622998 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"38d75bde-7432-41e0-860c-b2d7219e518a","Type":"ContainerDied","Data":"d5815b3bd138919d8bc3e41d73491f46a1a868a7d27a3c7b90dc605942fa4dab"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.626452 4750 generic.go:334] "Generic (PLEG): container finished" podID="02f8868c-0e0f-4e75-9ef2-d74188d5fcda" containerID="4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293" exitCode=0 Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.626487 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84668977c4-sql9c" 
event={"ID":"02f8868c-0e0f-4e75-9ef2-d74188d5fcda","Type":"ContainerDied","Data":"4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.626507 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84668977c4-sql9c" event={"ID":"02f8868c-0e0f-4e75-9ef2-d74188d5fcda","Type":"ContainerDied","Data":"6cabedd0535a9948909e6f7f34cfdbfcbfc0c86ba6b3333c9ff644ab5d2761ba"} Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.626557 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84668977c4-sql9c" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.629962 4750 scope.go:117] "RemoveContainer" containerID="7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8" Feb 14 14:19:40 crc kubenswrapper[4750]: E0214 14:19:40.634683 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8\": container with ID starting with 7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8 not found: ID does not exist" containerID="7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.634722 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8"} err="failed to get container status \"7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8\": rpc error: code = NotFound desc = could not find container \"7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8\": container with ID starting with 7f8816e65a7863e20473f91cff4e2ed71e387c5f4ef83e5afcbc77f2cb6a52d8 not found: ID does not exist" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.634746 4750 scope.go:117] "RemoveContainer" 
containerID="4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.678185 4750 scope.go:117] "RemoveContainer" containerID="4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.679258 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.679234288 podStartE2EDuration="38.679234288s" podCreationTimestamp="2026-02-14 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:19:40.635288828 +0000 UTC m=+1652.661278309" watchObservedRunningTime="2026-02-14 14:19:40.679234288 +0000 UTC m=+1652.705223769" Feb 14 14:19:40 crc kubenswrapper[4750]: E0214 14:19:40.684100 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293\": container with ID starting with 4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293 not found: ID does not exist" containerID="4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.684342 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293"} err="failed to get container status \"4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293\": rpc error: code = NotFound desc = could not find container \"4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293\": container with ID starting with 4b77afd03f8bb65d26959065d291fc3bd8c903523c93c1cfcfc5efd65f6cd293 not found: ID does not exist" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772062 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xwshx\" (UniqueName: \"kubernetes.io/projected/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-kube-api-access-xwshx\") pod \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772295 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95m7g\" (UniqueName: \"kubernetes.io/projected/1e1b3da1-2a06-4b24-86e6-921864918a8e-kube-api-access-95m7g\") pod \"1e1b3da1-2a06-4b24-86e6-921864918a8e\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772435 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-internal-tls-certs\") pod \"1e1b3da1-2a06-4b24-86e6-921864918a8e\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772527 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-combined-ca-bundle\") pod \"1e1b3da1-2a06-4b24-86e6-921864918a8e\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772636 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data-custom\") pod \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772712 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-public-tls-certs\") pod 
\"1e1b3da1-2a06-4b24-86e6-921864918a8e\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772791 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data\") pod \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.772910 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-internal-tls-certs\") pod \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.773223 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data\") pod \"1e1b3da1-2a06-4b24-86e6-921864918a8e\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.773319 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-public-tls-certs\") pod \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.773409 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data-custom\") pod \"1e1b3da1-2a06-4b24-86e6-921864918a8e\" (UID: \"1e1b3da1-2a06-4b24-86e6-921864918a8e\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.773475 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-combined-ca-bundle\") pod \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\" (UID: \"02f8868c-0e0f-4e75-9ef2-d74188d5fcda\") " Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.775935 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-kube-api-access-xwshx" (OuterVolumeSpecName: "kube-api-access-xwshx") pod "02f8868c-0e0f-4e75-9ef2-d74188d5fcda" (UID: "02f8868c-0e0f-4e75-9ef2-d74188d5fcda"). InnerVolumeSpecName "kube-api-access-xwshx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.783569 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1e1b3da1-2a06-4b24-86e6-921864918a8e" (UID: "1e1b3da1-2a06-4b24-86e6-921864918a8e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.783707 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1b3da1-2a06-4b24-86e6-921864918a8e-kube-api-access-95m7g" (OuterVolumeSpecName: "kube-api-access-95m7g") pod "1e1b3da1-2a06-4b24-86e6-921864918a8e" (UID: "1e1b3da1-2a06-4b24-86e6-921864918a8e"). InnerVolumeSpecName "kube-api-access-95m7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.783774 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "02f8868c-0e0f-4e75-9ef2-d74188d5fcda" (UID: "02f8868c-0e0f-4e75-9ef2-d74188d5fcda"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.825659 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e1b3da1-2a06-4b24-86e6-921864918a8e" (UID: "1e1b3da1-2a06-4b24-86e6-921864918a8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.844656 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f8868c-0e0f-4e75-9ef2-d74188d5fcda" (UID: "02f8868c-0e0f-4e75-9ef2-d74188d5fcda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.878389 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.878419 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.878428 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.878436 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.878445 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwshx\" (UniqueName: \"kubernetes.io/projected/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-kube-api-access-xwshx\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.878455 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95m7g\" (UniqueName: \"kubernetes.io/projected/1e1b3da1-2a06-4b24-86e6-921864918a8e-kube-api-access-95m7g\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.886427 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1e1b3da1-2a06-4b24-86e6-921864918a8e" (UID: "1e1b3da1-2a06-4b24-86e6-921864918a8e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.886507 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "02f8868c-0e0f-4e75-9ef2-d74188d5fcda" (UID: "02f8868c-0e0f-4e75-9ef2-d74188d5fcda"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.919396 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1e1b3da1-2a06-4b24-86e6-921864918a8e" (UID: "1e1b3da1-2a06-4b24-86e6-921864918a8e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.922320 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "02f8868c-0e0f-4e75-9ef2-d74188d5fcda" (UID: "02f8868c-0e0f-4e75-9ef2-d74188d5fcda"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.933344 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data" (OuterVolumeSpecName: "config-data") pod "02f8868c-0e0f-4e75-9ef2-d74188d5fcda" (UID: "02f8868c-0e0f-4e75-9ef2-d74188d5fcda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.941944 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data" (OuterVolumeSpecName: "config-data") pod "1e1b3da1-2a06-4b24-86e6-921864918a8e" (UID: "1e1b3da1-2a06-4b24-86e6-921864918a8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.983054 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.983102 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.983145 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.983154 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.983162 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1b3da1-2a06-4b24-86e6-921864918a8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:40 crc kubenswrapper[4750]: I0214 14:19:40.983170 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f8868c-0e0f-4e75-9ef2-d74188d5fcda-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:19:41 crc kubenswrapper[4750]: I0214 14:19:41.224311 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77656b9796-ff5r4"] Feb 14 14:19:41 crc kubenswrapper[4750]: I0214 14:19:41.242439 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-77656b9796-ff5r4"] Feb 14 14:19:41 crc 
kubenswrapper[4750]: I0214 14:19:41.272188 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84668977c4-sql9c"] Feb 14 14:19:41 crc kubenswrapper[4750]: I0214 14:19:41.297029 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84668977c4-sql9c"] Feb 14 14:19:41 crc kubenswrapper[4750]: I0214 14:19:41.642840 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"38d75bde-7432-41e0-860c-b2d7219e518a","Type":"ContainerStarted","Data":"a533bac869716a5b76aff06f3b7e6d1731fbf0c2bcd8cdd7ca410724ddbf0124"} Feb 14 14:19:41 crc kubenswrapper[4750]: I0214 14:19:41.643102 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 14:19:41 crc kubenswrapper[4750]: I0214 14:19:41.685596 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=39.685577544 podStartE2EDuration="39.685577544s" podCreationTimestamp="2026-02-14 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:19:41.671755931 +0000 UTC m=+1653.697745412" watchObservedRunningTime="2026-02-14 14:19:41.685577544 +0000 UTC m=+1653.711567025" Feb 14 14:19:42 crc kubenswrapper[4750]: I0214 14:19:42.802965 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f8868c-0e0f-4e75-9ef2-d74188d5fcda" path="/var/lib/kubelet/pods/02f8868c-0e0f-4e75-9ef2-d74188d5fcda/volumes" Feb 14 14:19:42 crc kubenswrapper[4750]: I0214 14:19:42.804126 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e1b3da1-2a06-4b24-86e6-921864918a8e" path="/var/lib/kubelet/pods/1e1b3da1-2a06-4b24-86e6-921864918a8e/volumes" Feb 14 14:19:43 crc kubenswrapper[4750]: I0214 14:19:43.742247 4750 scope.go:117] "RemoveContainer" 
containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:19:43 crc kubenswrapper[4750]: E0214 14:19:43.743600 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:19:44 crc kubenswrapper[4750]: I0214 14:19:44.782560 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d98b7d7bf-rc97s" Feb 14 14:19:44 crc kubenswrapper[4750]: I0214 14:19:44.840856 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-59b797bddd-xm4wn"] Feb 14 14:19:44 crc kubenswrapper[4750]: I0214 14:19:44.842278 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-59b797bddd-xm4wn" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerName="heat-engine" containerID="cri-o://1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" gracePeriod=60 Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.475019 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-nlb47"] Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.503166 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-nlb47"] Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.567152 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-qpbmh"] Feb 14 14:19:48 crc kubenswrapper[4750]: E0214 14:19:48.567748 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1b3da1-2a06-4b24-86e6-921864918a8e" containerName="heat-api" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.567768 
4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1b3da1-2a06-4b24-86e6-921864918a8e" containerName="heat-api" Feb 14 14:19:48 crc kubenswrapper[4750]: E0214 14:19:48.567806 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f8868c-0e0f-4e75-9ef2-d74188d5fcda" containerName="heat-cfnapi" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.567815 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f8868c-0e0f-4e75-9ef2-d74188d5fcda" containerName="heat-cfnapi" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.568044 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f8868c-0e0f-4e75-9ef2-d74188d5fcda" containerName="heat-cfnapi" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.568067 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1b3da1-2a06-4b24-86e6-921864918a8e" containerName="heat-api" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.568917 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.570938 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.588188 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qpbmh"] Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.702215 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fspc\" (UniqueName: \"kubernetes.io/projected/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-kube-api-access-4fspc\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.702414 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-combined-ca-bundle\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.702526 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-config-data\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.702596 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-scripts\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.763678 4750 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233e8b69-480e-4c6b-a127-e0c8b21616ed" path="/var/lib/kubelet/pods/233e8b69-480e-4c6b-a127-e0c8b21616ed/volumes" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.804376 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fspc\" (UniqueName: \"kubernetes.io/projected/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-kube-api-access-4fspc\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.804482 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-combined-ca-bundle\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.804567 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-config-data\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.804619 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-scripts\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.810696 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-combined-ca-bundle\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " 
pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.812182 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-config-data\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.822897 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-scripts\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.824883 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fspc\" (UniqueName: \"kubernetes.io/projected/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-kube-api-access-4fspc\") pod \"aodh-db-sync-qpbmh\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:48 crc kubenswrapper[4750]: I0214 14:19:48.906637 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.138646 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qpbmh"] Feb 14 14:19:52 crc kubenswrapper[4750]: E0214 14:19:52.207172 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:19:52 crc kubenswrapper[4750]: E0214 14:19:52.208705 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:19:52 crc kubenswrapper[4750]: E0214 14:19:52.209926 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:19:52 crc kubenswrapper[4750]: E0214 14:19:52.209952 4750 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-59b797bddd-xm4wn" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerName="heat-engine" Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.763947 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.780427 
4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.906637 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.949080 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" event={"ID":"579b6931-42c7-4a8f-9045-b9b993aa3fbd","Type":"ContainerStarted","Data":"b5853c35be5dfc39626309b6a8dc7512bb68d91c11feb539b93ce1dec8b2a9b9"} Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.959683 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qpbmh" event={"ID":"5bce0a3c-19ee-4338-bed6-63e2a01bf3de","Type":"ContainerStarted","Data":"909fbe1d536e866d25f0c2ded7a57897130d964d01e414686896e2d5568b884c"} Feb 14 14:19:52 crc kubenswrapper[4750]: I0214 14:19:52.974694 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" podStartSLOduration=3.2938212399999998 podStartE2EDuration="14.974680577s" podCreationTimestamp="2026-02-14 14:19:38 +0000 UTC" firstStartedPulling="2026-02-14 14:19:40.004283168 +0000 UTC m=+1652.030272649" lastFinishedPulling="2026-02-14 14:19:51.685142505 +0000 UTC m=+1663.711131986" observedRunningTime="2026-02-14 14:19:52.974219684 +0000 UTC m=+1665.000209165" watchObservedRunningTime="2026-02-14 14:19:52.974680577 +0000 UTC m=+1665.000670058" Feb 14 14:19:57 crc kubenswrapper[4750]: I0214 14:19:57.295788 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerName="rabbitmq" containerID="cri-o://4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9" gracePeriod=604796 Feb 14 14:19:58 crc kubenswrapper[4750]: I0214 14:19:58.017570 4750 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qpbmh" event={"ID":"5bce0a3c-19ee-4338-bed6-63e2a01bf3de","Type":"ContainerStarted","Data":"e6eb83d078e2f91bf8a61029bdbd9ef94c21719bdff8a2fb4e93ceebe278e0e2"} Feb 14 14:19:58 crc kubenswrapper[4750]: I0214 14:19:58.054064 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-qpbmh" podStartSLOduration=4.584373436 podStartE2EDuration="10.054046657s" podCreationTimestamp="2026-02-14 14:19:48 +0000 UTC" firstStartedPulling="2026-02-14 14:19:52.13562055 +0000 UTC m=+1664.161610031" lastFinishedPulling="2026-02-14 14:19:57.605293771 +0000 UTC m=+1669.631283252" observedRunningTime="2026-02-14 14:19:58.04639876 +0000 UTC m=+1670.072388251" watchObservedRunningTime="2026-02-14 14:19:58.054046657 +0000 UTC m=+1670.080036138" Feb 14 14:19:58 crc kubenswrapper[4750]: I0214 14:19:58.755284 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:19:58 crc kubenswrapper[4750]: E0214 14:19:58.755660 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:20:01 crc kubenswrapper[4750]: I0214 14:20:01.054888 4750 generic.go:334] "Generic (PLEG): container finished" podID="5bce0a3c-19ee-4338-bed6-63e2a01bf3de" containerID="e6eb83d078e2f91bf8a61029bdbd9ef94c21719bdff8a2fb4e93ceebe278e0e2" exitCode=0 Feb 14 14:20:01 crc kubenswrapper[4750]: I0214 14:20:01.055448 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qpbmh" 
event={"ID":"5bce0a3c-19ee-4338-bed6-63e2a01bf3de","Type":"ContainerDied","Data":"e6eb83d078e2f91bf8a61029bdbd9ef94c21719bdff8a2fb4e93ceebe278e0e2"} Feb 14 14:20:02 crc kubenswrapper[4750]: E0214 14:20:02.207727 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:20:02 crc kubenswrapper[4750]: E0214 14:20:02.209008 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:20:02 crc kubenswrapper[4750]: E0214 14:20:02.211431 4750 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 14:20:02 crc kubenswrapper[4750]: E0214 14:20:02.211492 4750 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-59b797bddd-xm4wn" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerName="heat-engine" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.537151 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.686598 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fspc\" (UniqueName: \"kubernetes.io/projected/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-kube-api-access-4fspc\") pod \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.686694 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-combined-ca-bundle\") pod \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.686819 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-scripts\") pod \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.686971 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-config-data\") pod \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\" (UID: \"5bce0a3c-19ee-4338-bed6-63e2a01bf3de\") " Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.693017 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-scripts" (OuterVolumeSpecName: "scripts") pod "5bce0a3c-19ee-4338-bed6-63e2a01bf3de" (UID: "5bce0a3c-19ee-4338-bed6-63e2a01bf3de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.693098 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-kube-api-access-4fspc" (OuterVolumeSpecName: "kube-api-access-4fspc") pod "5bce0a3c-19ee-4338-bed6-63e2a01bf3de" (UID: "5bce0a3c-19ee-4338-bed6-63e2a01bf3de"). InnerVolumeSpecName "kube-api-access-4fspc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.719333 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-config-data" (OuterVolumeSpecName: "config-data") pod "5bce0a3c-19ee-4338-bed6-63e2a01bf3de" (UID: "5bce0a3c-19ee-4338-bed6-63e2a01bf3de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.756030 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bce0a3c-19ee-4338-bed6-63e2a01bf3de" (UID: "5bce0a3c-19ee-4338-bed6-63e2a01bf3de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.789677 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.789724 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fspc\" (UniqueName: \"kubernetes.io/projected/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-kube-api-access-4fspc\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.789744 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:02 crc kubenswrapper[4750]: I0214 14:20:02.789757 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bce0a3c-19ee-4338-bed6-63e2a01bf3de-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.080577 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qpbmh" event={"ID":"5bce0a3c-19ee-4338-bed6-63e2a01bf3de","Type":"ContainerDied","Data":"909fbe1d536e866d25f0c2ded7a57897130d964d01e414686896e2d5568b884c"} Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.080805 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="909fbe1d536e866d25f0c2ded7a57897130d964d01e414686896e2d5568b884c" Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.080627 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qpbmh" Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.554429 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.554717 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-api" containerID="cri-o://953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71" gracePeriod=30 Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.555133 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-notifier" containerID="cri-o://9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940" gracePeriod=30 Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.555292 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-listener" containerID="cri-o://455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88" gracePeriod=30 Feb 14 14:20:03 crc kubenswrapper[4750]: I0214 14:20:03.555417 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-evaluator" containerID="cri-o://c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb" gracePeriod=30 Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.047774 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.140513 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-plugins\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.140577 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-erlang-cookie\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.140614 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-plugins-conf\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.140664 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-server-conf\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.140735 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-confd\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.140795 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-config-data\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.141718 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.142281 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.142772 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.167509 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.167714 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn9lh\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-kube-api-access-sn9lh\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.167754 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-tls\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.167861 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb0d3b08-53c1-4396-9413-bf581fad715a-pod-info\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.167943 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb0d3b08-53c1-4396-9413-bf581fad715a-erlang-cookie-secret\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.170499 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.170568 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.170595 4750 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.188382 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0d3b08-53c1-4396-9413-bf581fad715a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.188764 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-kube-api-access-sn9lh" (OuterVolumeSpecName: "kube-api-access-sn9lh") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "kube-api-access-sn9lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.195963 4750 generic.go:334] "Generic (PLEG): container finished" podID="579b6931-42c7-4a8f-9045-b9b993aa3fbd" containerID="b5853c35be5dfc39626309b6a8dc7512bb68d91c11feb539b93ce1dec8b2a9b9" exitCode=0 Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.197209 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" event={"ID":"579b6931-42c7-4a8f-9045-b9b993aa3fbd","Type":"ContainerDied","Data":"b5853c35be5dfc39626309b6a8dc7512bb68d91c11feb539b93ce1dec8b2a9b9"} Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.199169 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: E0214 14:20:04.217493 4750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e podName:cb0d3b08-53c1-4396-9413-bf581fad715a nodeName:}" failed. No retries permitted until 2026-02-14 14:20:04.717465493 +0000 UTC m=+1676.743454974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.229535 4750 generic.go:334] "Generic (PLEG): container finished" podID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerID="c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb" exitCode=0 Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.229568 4750 generic.go:334] "Generic (PLEG): container finished" podID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerID="953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71" exitCode=0 Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.229626 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerDied","Data":"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb"} Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.229651 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerDied","Data":"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71"} Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.233311 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cb0d3b08-53c1-4396-9413-bf581fad715a-pod-info" (OuterVolumeSpecName: "pod-info") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.263659 4750 generic.go:334] "Generic (PLEG): container finished" podID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerID="4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9" exitCode=0 Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.263705 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cb0d3b08-53c1-4396-9413-bf581fad715a","Type":"ContainerDied","Data":"4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9"} Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.263731 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cb0d3b08-53c1-4396-9413-bf581fad715a","Type":"ContainerDied","Data":"7824950b151ce476d25e8060abb6eb9f43220493343d9131f04a73b06a671c81"} Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.263761 4750 scope.go:117] "RemoveContainer" containerID="4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.263985 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.279086 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn9lh\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-kube-api-access-sn9lh\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.279128 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.279138 4750 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb0d3b08-53c1-4396-9413-bf581fad715a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.279156 4750 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb0d3b08-53c1-4396-9413-bf581fad715a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.280520 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-config-data" (OuterVolumeSpecName: "config-data") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.317773 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-server-conf" (OuterVolumeSpecName: "server-conf") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.342797 4750 scope.go:117] "RemoveContainer" containerID="9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.381515 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.381545 4750 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb0d3b08-53c1-4396-9413-bf581fad715a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.385828 4750 scope.go:117] "RemoveContainer" containerID="4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9" Feb 14 14:20:04 crc kubenswrapper[4750]: E0214 14:20:04.386287 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9\": container with ID starting with 4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9 not found: ID does not exist" containerID="4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.386331 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9"} err="failed to get container status \"4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9\": rpc error: code = NotFound desc = could not find container \"4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9\": container with ID starting with 4ffbbe6cfa904b5f096d3cf2c930a02d898724d01dda391a38606fe36d1bf9a9 not found: ID does not exist" Feb 
14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.386357 4750 scope.go:117] "RemoveContainer" containerID="9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb" Feb 14 14:20:04 crc kubenswrapper[4750]: E0214 14:20:04.386561 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb\": container with ID starting with 9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb not found: ID does not exist" containerID="9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.386597 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb"} err="failed to get container status \"9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb\": rpc error: code = NotFound desc = could not find container \"9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb\": container with ID starting with 9c083915c6ea9f8f676dd57ddc8dd1b009bc1104969ea1bc7b4b529aeb680abb not found: ID does not exist" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.403736 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.483331 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb0d3b08-53c1-4396-9413-bf581fad715a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.660956 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.788737 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data\") pod \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.789223 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-combined-ca-bundle\") pod \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.789432 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data-custom\") pod \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.789538 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"cb0d3b08-53c1-4396-9413-bf581fad715a\" (UID: \"cb0d3b08-53c1-4396-9413-bf581fad715a\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 
14:20:04.790098 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8c8h\" (UniqueName: \"kubernetes.io/projected/fd095088-0ec4-428e-bfef-11c7c57ecfdb-kube-api-access-z8c8h\") pod \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\" (UID: \"fd095088-0ec4-428e-bfef-11c7c57ecfdb\") " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.798556 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd095088-0ec4-428e-bfef-11c7c57ecfdb" (UID: "fd095088-0ec4-428e-bfef-11c7c57ecfdb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.801359 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd095088-0ec4-428e-bfef-11c7c57ecfdb-kube-api-access-z8c8h" (OuterVolumeSpecName: "kube-api-access-z8c8h") pod "fd095088-0ec4-428e-bfef-11c7c57ecfdb" (UID: "fd095088-0ec4-428e-bfef-11c7c57ecfdb"). InnerVolumeSpecName "kube-api-access-z8c8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.817514 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e" (OuterVolumeSpecName: "persistence") pod "cb0d3b08-53c1-4396-9413-bf581fad715a" (UID: "cb0d3b08-53c1-4396-9413-bf581fad715a"). InnerVolumeSpecName "pvc-b7191dbc-b270-422c-b7b5-232458126a4e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.832507 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd095088-0ec4-428e-bfef-11c7c57ecfdb" (UID: "fd095088-0ec4-428e-bfef-11c7c57ecfdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.866957 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data" (OuterVolumeSpecName: "config-data") pod "fd095088-0ec4-428e-bfef-11c7c57ecfdb" (UID: "fd095088-0ec4-428e-bfef-11c7c57ecfdb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.893780 4750 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.893833 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") on node \"crc\" " Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.893846 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8c8h\" (UniqueName: \"kubernetes.io/projected/fd095088-0ec4-428e-bfef-11c7c57ecfdb-kube-api-access-z8c8h\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.893858 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.893869 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd095088-0ec4-428e-bfef-11c7c57ecfdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.921429 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.921567 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b7191dbc-b270-422c-b7b5-232458126a4e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e") on node "crc" Feb 14 14:20:04 crc kubenswrapper[4750]: I0214 14:20:04.995647 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.002979 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.014125 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.038975 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:20:05 crc kubenswrapper[4750]: E0214 14:20:05.039473 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerName="rabbitmq" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039491 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" 
containerName="rabbitmq" Feb 14 14:20:05 crc kubenswrapper[4750]: E0214 14:20:05.039527 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bce0a3c-19ee-4338-bed6-63e2a01bf3de" containerName="aodh-db-sync" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039534 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bce0a3c-19ee-4338-bed6-63e2a01bf3de" containerName="aodh-db-sync" Feb 14 14:20:05 crc kubenswrapper[4750]: E0214 14:20:05.039545 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerName="setup-container" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039552 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerName="setup-container" Feb 14 14:20:05 crc kubenswrapper[4750]: E0214 14:20:05.039562 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerName="heat-engine" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039568 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerName="heat-engine" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039769 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerName="heat-engine" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039795 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" containerName="rabbitmq" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.039811 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bce0a3c-19ee-4338-bed6-63e2a01bf3de" containerName="aodh-db-sync" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.041053 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.060470 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097429 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-config-data\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097490 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097512 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f096636-5e6f-428e-8a30-9433d6ac312c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097533 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097556 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/3f096636-5e6f-428e-8a30-9433d6ac312c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097599 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097614 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097678 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097699 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbt4\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-kube-api-access-kgbt4\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097739 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.097805 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199582 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199627 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbt4\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-kube-api-access-kgbt4\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199669 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199735 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199818 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-config-data\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199871 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.199893 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f096636-5e6f-428e-8a30-9433d6ac312c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200244 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200620 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: 
\"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200709 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-config-data\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200780 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f096636-5e6f-428e-8a30-9433d6ac312c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200840 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200845 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.200905 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.201054 4750 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.202005 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f096636-5e6f-428e-8a30-9433d6ac312c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.206317 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f096636-5e6f-428e-8a30-9433d6ac312c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.206422 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.206431 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.206980 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f096636-5e6f-428e-8a30-9433d6ac312c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: 
\"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.207070 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.207094 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bb78f0012df4d86c020d4d63ca325a5a4ecea0fafe79b486510d70a930ce40b/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.219219 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbt4\" (UniqueName: \"kubernetes.io/projected/3f096636-5e6f-428e-8a30-9433d6ac312c-kube-api-access-kgbt4\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.281139 4750 generic.go:334] "Generic (PLEG): container finished" podID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" exitCode=0 Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.281353 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-59b797bddd-xm4wn" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.283669 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59b797bddd-xm4wn" event={"ID":"fd095088-0ec4-428e-bfef-11c7c57ecfdb","Type":"ContainerDied","Data":"1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1"} Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.283726 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59b797bddd-xm4wn" event={"ID":"fd095088-0ec4-428e-bfef-11c7c57ecfdb","Type":"ContainerDied","Data":"eb7ba83f1ab3b095221719ae811ff45233935ea826f4a8e9587c6a689dbcbdb5"} Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.283747 4750 scope.go:117] "RemoveContainer" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.292201 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b7191dbc-b270-422c-b7b5-232458126a4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7191dbc-b270-422c-b7b5-232458126a4e\") pod \"rabbitmq-server-1\" (UID: \"3f096636-5e6f-428e-8a30-9433d6ac312c\") " pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.334512 4750 scope.go:117] "RemoveContainer" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" Feb 14 14:20:05 crc kubenswrapper[4750]: E0214 14:20:05.335262 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1\": container with ID starting with 1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1 not found: ID does not exist" containerID="1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.335309 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1"} err="failed to get container status \"1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1\": rpc error: code = NotFound desc = could not find container \"1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1\": container with ID starting with 1626dec4d74e7b95a0d895019812c4be2690ff36160d286974124e54857136f1 not found: ID does not exist" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.338155 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-59b797bddd-xm4wn"] Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.348866 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-59b797bddd-xm4wn"] Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.359346 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 14:20:05 crc kubenswrapper[4750]: I0214 14:20:05.950536 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.054481 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.146483 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-inventory\") pod \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.146568 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-ssh-key-openstack-edpm-ipam\") pod \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.146777 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-repo-setup-combined-ca-bundle\") pod \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.146818 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbbrt\" (UniqueName: \"kubernetes.io/projected/579b6931-42c7-4a8f-9045-b9b993aa3fbd-kube-api-access-qbbrt\") pod \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\" (UID: \"579b6931-42c7-4a8f-9045-b9b993aa3fbd\") " Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.151480 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579b6931-42c7-4a8f-9045-b9b993aa3fbd-kube-api-access-qbbrt" (OuterVolumeSpecName: "kube-api-access-qbbrt") pod 
"579b6931-42c7-4a8f-9045-b9b993aa3fbd" (UID: "579b6931-42c7-4a8f-9045-b9b993aa3fbd"). InnerVolumeSpecName "kube-api-access-qbbrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.151679 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "579b6931-42c7-4a8f-9045-b9b993aa3fbd" (UID: "579b6931-42c7-4a8f-9045-b9b993aa3fbd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.195902 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-inventory" (OuterVolumeSpecName: "inventory") pod "579b6931-42c7-4a8f-9045-b9b993aa3fbd" (UID: "579b6931-42c7-4a8f-9045-b9b993aa3fbd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.197779 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "579b6931-42c7-4a8f-9045-b9b993aa3fbd" (UID: "579b6931-42c7-4a8f-9045-b9b993aa3fbd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.250868 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.250905 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.250941 4750 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b6931-42c7-4a8f-9045-b9b993aa3fbd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.250954 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbbrt\" (UniqueName: \"kubernetes.io/projected/579b6931-42c7-4a8f-9045-b9b993aa3fbd-kube-api-access-qbbrt\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.306765 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" event={"ID":"579b6931-42c7-4a8f-9045-b9b993aa3fbd","Type":"ContainerDied","Data":"68a74c12b39c80be7b12af88cef579c57101d2d406eda2fc92b474b57dd5207c"} Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.306780 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.306804 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a74c12b39c80be7b12af88cef579c57101d2d406eda2fc92b474b57dd5207c" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.317886 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7"] Feb 14 14:20:06 crc kubenswrapper[4750]: E0214 14:20:06.318456 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579b6931-42c7-4a8f-9045-b9b993aa3fbd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.318484 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="579b6931-42c7-4a8f-9045-b9b993aa3fbd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.318842 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="579b6931-42c7-4a8f-9045-b9b993aa3fbd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.322551 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.322840 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3f096636-5e6f-428e-8a30-9433d6ac312c","Type":"ContainerStarted","Data":"ff6b31e5e2d1b11a9390836753ba673c76362017b4b9ad5aa581950a1d827de3"} Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.327297 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.328031 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.328231 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.328451 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.331781 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7"] Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.359947 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.360040 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wb84\" (UniqueName: 
\"kubernetes.io/projected/17baaf13-3126-48d1-a32e-522bf2bf43ff-kube-api-access-8wb84\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.360073 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.462096 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.462188 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wb84\" (UniqueName: \"kubernetes.io/projected/17baaf13-3126-48d1-a32e-522bf2bf43ff-kube-api-access-8wb84\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.462220 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.465772 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.466077 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.479274 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wb84\" (UniqueName: \"kubernetes.io/projected/17baaf13-3126-48d1-a32e-522bf2bf43ff-kube-api-access-8wb84\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xtjr7\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.671130 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.768165 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0d3b08-53c1-4396-9413-bf581fad715a" path="/var/lib/kubelet/pods/cb0d3b08-53c1-4396-9413-bf581fad715a/volumes" Feb 14 14:20:06 crc kubenswrapper[4750]: I0214 14:20:06.769371 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd095088-0ec4-428e-bfef-11c7c57ecfdb" path="/var/lib/kubelet/pods/fd095088-0ec4-428e-bfef-11c7c57ecfdb/volumes" Feb 14 14:20:07 crc kubenswrapper[4750]: I0214 14:20:07.272906 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7"] Feb 14 14:20:07 crc kubenswrapper[4750]: W0214 14:20:07.367895 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17baaf13_3126_48d1_a32e_522bf2bf43ff.slice/crio-aaafb7696dd8fb56ca99ec950388ad3e49e14a5d49727876fa68cdd63810b7b7 WatchSource:0}: Error finding container aaafb7696dd8fb56ca99ec950388ad3e49e14a5d49727876fa68cdd63810b7b7: Status 404 returned error can't find the container with id aaafb7696dd8fb56ca99ec950388ad3e49e14a5d49727876fa68cdd63810b7b7 Feb 14 14:20:08 crc kubenswrapper[4750]: I0214 14:20:08.355670 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3f096636-5e6f-428e-8a30-9433d6ac312c","Type":"ContainerStarted","Data":"cbd83ccd6edda259840f3ed773130d53d0d138ed4ea4fe99f632c151ece22699"} Feb 14 14:20:08 crc kubenswrapper[4750]: I0214 14:20:08.380513 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" event={"ID":"17baaf13-3126-48d1-a32e-522bf2bf43ff","Type":"ContainerStarted","Data":"85b3d4120988dd68182951e523b7ccfa00672ce44acdf9fecdbd8d0d802038b3"} Feb 14 14:20:08 crc kubenswrapper[4750]: 
I0214 14:20:08.380606 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" event={"ID":"17baaf13-3126-48d1-a32e-522bf2bf43ff","Type":"ContainerStarted","Data":"aaafb7696dd8fb56ca99ec950388ad3e49e14a5d49727876fa68cdd63810b7b7"} Feb 14 14:20:08 crc kubenswrapper[4750]: I0214 14:20:08.465879 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" podStartSLOduration=2.070730374 podStartE2EDuration="2.465862544s" podCreationTimestamp="2026-02-14 14:20:06 +0000 UTC" firstStartedPulling="2026-02-14 14:20:07.37031306 +0000 UTC m=+1679.396302541" lastFinishedPulling="2026-02-14 14:20:07.76544523 +0000 UTC m=+1679.791434711" observedRunningTime="2026-02-14 14:20:08.458625468 +0000 UTC m=+1680.484614959" watchObservedRunningTime="2026-02-14 14:20:08.465862544 +0000 UTC m=+1680.491852025" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.324056 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397336 4750 generic.go:334] "Generic (PLEG): container finished" podID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerID="455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88" exitCode=0 Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397366 4750 generic.go:334] "Generic (PLEG): container finished" podID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerID="9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940" exitCode=0 Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397419 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerDied","Data":"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88"} Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397488 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerDied","Data":"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940"} Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397497 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"63273a72-38f5-4186-8e5f-ac589414fbd6","Type":"ContainerDied","Data":"f519cdcb9c51d12a1bcf715483103aa71fb0ac56fc26d623c2ee996c25ac4273"} Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.397510 4750 scope.go:117] "RemoveContainer" containerID="455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.441228 4750 scope.go:117] "RemoveContainer" containerID="9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.477722 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s79q\" (UniqueName: \"kubernetes.io/projected/63273a72-38f5-4186-8e5f-ac589414fbd6-kube-api-access-5s79q\") pod \"63273a72-38f5-4186-8e5f-ac589414fbd6\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.477879 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-combined-ca-bundle\") pod \"63273a72-38f5-4186-8e5f-ac589414fbd6\" (UID: 
\"63273a72-38f5-4186-8e5f-ac589414fbd6\") " Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.477900 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-config-data\") pod \"63273a72-38f5-4186-8e5f-ac589414fbd6\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.477984 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-internal-tls-certs\") pod \"63273a72-38f5-4186-8e5f-ac589414fbd6\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.478103 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-scripts\") pod \"63273a72-38f5-4186-8e5f-ac589414fbd6\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.478181 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-public-tls-certs\") pod \"63273a72-38f5-4186-8e5f-ac589414fbd6\" (UID: \"63273a72-38f5-4186-8e5f-ac589414fbd6\") " Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.483473 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63273a72-38f5-4186-8e5f-ac589414fbd6-kube-api-access-5s79q" (OuterVolumeSpecName: "kube-api-access-5s79q") pod "63273a72-38f5-4186-8e5f-ac589414fbd6" (UID: "63273a72-38f5-4186-8e5f-ac589414fbd6"). InnerVolumeSpecName "kube-api-access-5s79q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.486419 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-scripts" (OuterVolumeSpecName: "scripts") pod "63273a72-38f5-4186-8e5f-ac589414fbd6" (UID: "63273a72-38f5-4186-8e5f-ac589414fbd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.503313 4750 scope.go:117] "RemoveContainer" containerID="c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.559996 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "63273a72-38f5-4186-8e5f-ac589414fbd6" (UID: "63273a72-38f5-4186-8e5f-ac589414fbd6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.563024 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "63273a72-38f5-4186-8e5f-ac589414fbd6" (UID: "63273a72-38f5-4186-8e5f-ac589414fbd6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.581675 4750 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.581859 4750 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.581919 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s79q\" (UniqueName: \"kubernetes.io/projected/63273a72-38f5-4186-8e5f-ac589414fbd6-kube-api-access-5s79q\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.582000 4750 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.630931 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-config-data" (OuterVolumeSpecName: "config-data") pod "63273a72-38f5-4186-8e5f-ac589414fbd6" (UID: "63273a72-38f5-4186-8e5f-ac589414fbd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.637221 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63273a72-38f5-4186-8e5f-ac589414fbd6" (UID: "63273a72-38f5-4186-8e5f-ac589414fbd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.683654 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.683685 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63273a72-38f5-4186-8e5f-ac589414fbd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.692530 4750 scope.go:117] "RemoveContainer" containerID="953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.720014 4750 scope.go:117] "RemoveContainer" containerID="455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.726776 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88\": container with ID starting with 455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88 not found: ID does not exist" containerID="455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.726824 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88"} err="failed to get container status \"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88\": rpc error: code = NotFound desc = could not find container \"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88\": container with ID starting with 455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88 not found: ID does not 
exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.726848 4750 scope.go:117] "RemoveContainer" containerID="9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.727131 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940\": container with ID starting with 9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940 not found: ID does not exist" containerID="9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.727163 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940"} err="failed to get container status \"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940\": rpc error: code = NotFound desc = could not find container \"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940\": container with ID starting with 9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940 not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.727190 4750 scope.go:117] "RemoveContainer" containerID="c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.727423 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb\": container with ID starting with c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb not found: ID does not exist" containerID="c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728011 4750 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb"} err="failed to get container status \"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb\": rpc error: code = NotFound desc = could not find container \"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb\": container with ID starting with c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728035 4750 scope.go:117] "RemoveContainer" containerID="953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.728290 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71\": container with ID starting with 953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71 not found: ID does not exist" containerID="953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728313 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71"} err="failed to get container status \"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71\": rpc error: code = NotFound desc = could not find container \"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71\": container with ID starting with 953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71 not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728328 4750 scope.go:117] "RemoveContainer" containerID="455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728498 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88"} err="failed to get container status \"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88\": rpc error: code = NotFound desc = could not find container \"455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88\": container with ID starting with 455b66cf91d72c54070249483ed28c6b2cda5e3d0dba2a0a51c700774e4fae88 not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728515 4750 scope.go:117] "RemoveContainer" containerID="9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728720 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940"} err="failed to get container status \"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940\": rpc error: code = NotFound desc = could not find container \"9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940\": container with ID starting with 9a2ac08d068d51d9fc0c3a7929b0a93723fc74c533f91e14b0dbaa7b845af940 not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728743 4750 scope.go:117] "RemoveContainer" containerID="c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728906 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb"} err="failed to get container status \"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb\": rpc error: code = NotFound desc = could not find container \"c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb\": container with ID starting with 
c70f46d6dcd693b5b69a7e895727716fc7534c5ca43888328659cacfe84ffcdb not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.728922 4750 scope.go:117] "RemoveContainer" containerID="953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.729298 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71"} err="failed to get container status \"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71\": rpc error: code = NotFound desc = could not find container \"953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71\": container with ID starting with 953db87d5a3ff3a7a6f508bd94bb2f5204267b790087dac13466641d85b56a71 not found: ID does not exist" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.731230 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.743093 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.766184 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.766702 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-listener" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.766727 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-listener" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.766755 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-evaluator" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.766762 4750 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-evaluator" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.766784 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-api" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.766790 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-api" Feb 14 14:20:09 crc kubenswrapper[4750]: E0214 14:20:09.766807 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-notifier" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.766812 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-notifier" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.767012 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-notifier" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.767023 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-listener" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.767042 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-evaluator" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.767053 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" containerName="aodh-api" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.769581 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.771964 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.772215 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.772367 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ktj55" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.772495 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.772629 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.788383 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.889950 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-config-data\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.890039 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-public-tls-certs\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.890127 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-internal-tls-certs\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.890207 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6xks\" (UniqueName: \"kubernetes.io/projected/4fbf10ca-3c0c-4779-b08f-212a47db3302-kube-api-access-w6xks\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.890298 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.890909 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-scripts\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.994465 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-internal-tls-certs\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.994823 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6xks\" (UniqueName: \"kubernetes.io/projected/4fbf10ca-3c0c-4779-b08f-212a47db3302-kube-api-access-w6xks\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc 
kubenswrapper[4750]: I0214 14:20:09.995148 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.995348 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-scripts\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.995673 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-config-data\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:09 crc kubenswrapper[4750]: I0214 14:20:09.995788 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-public-tls-certs\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.001945 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-scripts\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.002032 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-internal-tls-certs\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " 
pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.002574 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-public-tls-certs\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.002637 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-config-data\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.002736 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbf10ca-3c0c-4779-b08f-212a47db3302-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.013285 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6xks\" (UniqueName: \"kubernetes.io/projected/4fbf10ca-3c0c-4779-b08f-212a47db3302-kube-api-access-w6xks\") pod \"aodh-0\" (UID: \"4fbf10ca-3c0c-4779-b08f-212a47db3302\") " pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.086577 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.666931 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.742554 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:20:10 crc kubenswrapper[4750]: E0214 14:20:10.742871 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:20:10 crc kubenswrapper[4750]: I0214 14:20:10.758812 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63273a72-38f5-4186-8e5f-ac589414fbd6" path="/var/lib/kubelet/pods/63273a72-38f5-4186-8e5f-ac589414fbd6/volumes" Feb 14 14:20:11 crc kubenswrapper[4750]: I0214 14:20:11.424928 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4fbf10ca-3c0c-4779-b08f-212a47db3302","Type":"ContainerStarted","Data":"5e486920b277b1c561ef70a1d62f0a56cfb3b2d30129576b5ff14babaea8ac66"} Feb 14 14:20:11 crc kubenswrapper[4750]: I0214 14:20:11.425227 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4fbf10ca-3c0c-4779-b08f-212a47db3302","Type":"ContainerStarted","Data":"73c55fc5e918a06a55434a33575bb5c0dbd6ce4256577cd2ead55420ba64f8b3"} Feb 14 14:20:12 crc kubenswrapper[4750]: I0214 14:20:12.442301 4750 generic.go:334] "Generic (PLEG): container finished" podID="17baaf13-3126-48d1-a32e-522bf2bf43ff" containerID="85b3d4120988dd68182951e523b7ccfa00672ce44acdf9fecdbd8d0d802038b3" exitCode=0 Feb 14 14:20:12 crc kubenswrapper[4750]: 
I0214 14:20:12.442378 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" event={"ID":"17baaf13-3126-48d1-a32e-522bf2bf43ff","Type":"ContainerDied","Data":"85b3d4120988dd68182951e523b7ccfa00672ce44acdf9fecdbd8d0d802038b3"} Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.055556 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.208415 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-ssh-key-openstack-edpm-ipam\") pod \"17baaf13-3126-48d1-a32e-522bf2bf43ff\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.208698 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-inventory\") pod \"17baaf13-3126-48d1-a32e-522bf2bf43ff\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.208742 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wb84\" (UniqueName: \"kubernetes.io/projected/17baaf13-3126-48d1-a32e-522bf2bf43ff-kube-api-access-8wb84\") pod \"17baaf13-3126-48d1-a32e-522bf2bf43ff\" (UID: \"17baaf13-3126-48d1-a32e-522bf2bf43ff\") " Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.215085 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17baaf13-3126-48d1-a32e-522bf2bf43ff-kube-api-access-8wb84" (OuterVolumeSpecName: "kube-api-access-8wb84") pod "17baaf13-3126-48d1-a32e-522bf2bf43ff" (UID: "17baaf13-3126-48d1-a32e-522bf2bf43ff"). InnerVolumeSpecName "kube-api-access-8wb84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.258702 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "17baaf13-3126-48d1-a32e-522bf2bf43ff" (UID: "17baaf13-3126-48d1-a32e-522bf2bf43ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.258994 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-inventory" (OuterVolumeSpecName: "inventory") pod "17baaf13-3126-48d1-a32e-522bf2bf43ff" (UID: "17baaf13-3126-48d1-a32e-522bf2bf43ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.312628 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.312660 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wb84\" (UniqueName: \"kubernetes.io/projected/17baaf13-3126-48d1-a32e-522bf2bf43ff-kube-api-access-8wb84\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.312672 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17baaf13-3126-48d1-a32e-522bf2bf43ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.522247 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.522230 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xtjr7" event={"ID":"17baaf13-3126-48d1-a32e-522bf2bf43ff","Type":"ContainerDied","Data":"aaafb7696dd8fb56ca99ec950388ad3e49e14a5d49727876fa68cdd63810b7b7"} Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.522551 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaafb7696dd8fb56ca99ec950388ad3e49e14a5d49727876fa68cdd63810b7b7" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.559634 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4fbf10ca-3c0c-4779-b08f-212a47db3302","Type":"ContainerStarted","Data":"f8227941c5e1f8b01b8889233f6196c48d128cb20b957074f22db4ead0d37941"} Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.576596 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq"] Feb 14 14:20:14 crc kubenswrapper[4750]: E0214 14:20:14.577079 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17baaf13-3126-48d1-a32e-522bf2bf43ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.577096 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="17baaf13-3126-48d1-a32e-522bf2bf43ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.577308 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="17baaf13-3126-48d1-a32e-522bf2bf43ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.578076 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.589474 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.589688 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.589904 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.590055 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.603864 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq"] Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.733681 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.733718 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9g8\" (UniqueName: \"kubernetes.io/projected/35713168-58fa-49ee-8783-2631f53b02a9-kube-api-access-kj9g8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 
14:20:14.733856 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.733937 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.836314 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.836371 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj9g8\" (UniqueName: \"kubernetes.io/projected/35713168-58fa-49ee-8783-2631f53b02a9-kube-api-access-kj9g8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.836477 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.836579 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.841987 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.842242 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.842743 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:14 crc kubenswrapper[4750]: I0214 14:20:14.856705 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj9g8\" (UniqueName: \"kubernetes.io/projected/35713168-58fa-49ee-8783-2631f53b02a9-kube-api-access-kj9g8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:15 crc kubenswrapper[4750]: I0214 14:20:15.062031 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:20:15 crc kubenswrapper[4750]: I0214 14:20:15.591505 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4fbf10ca-3c0c-4779-b08f-212a47db3302","Type":"ContainerStarted","Data":"c5dfdb4023ba5fb349de785d150000f1351bb4fc0a20fd52bf8839e79c9df436"} Feb 14 14:20:15 crc kubenswrapper[4750]: I0214 14:20:15.671290 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq"] Feb 14 14:20:16 crc kubenswrapper[4750]: W0214 14:20:16.104789 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35713168_58fa_49ee_8783_2631f53b02a9.slice/crio-a229a002818d85786deb311d1d7d061470abd8ff4ae4102afca2b9789b96570a WatchSource:0}: Error finding container a229a002818d85786deb311d1d7d061470abd8ff4ae4102afca2b9789b96570a: Status 404 returned error can't find the container with id a229a002818d85786deb311d1d7d061470abd8ff4ae4102afca2b9789b96570a Feb 14 14:20:16 crc kubenswrapper[4750]: I0214 14:20:16.607495 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" 
event={"ID":"35713168-58fa-49ee-8783-2631f53b02a9","Type":"ContainerStarted","Data":"a229a002818d85786deb311d1d7d061470abd8ff4ae4102afca2b9789b96570a"} Feb 14 14:20:16 crc kubenswrapper[4750]: I0214 14:20:16.610298 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4fbf10ca-3c0c-4779-b08f-212a47db3302","Type":"ContainerStarted","Data":"06538335c5a81fee49b4541c1e1e8722b31f86407fb6292b3afad7695d331db1"} Feb 14 14:20:16 crc kubenswrapper[4750]: I0214 14:20:16.661657 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.185085422 podStartE2EDuration="7.661633226s" podCreationTimestamp="2026-02-14 14:20:09 +0000 UTC" firstStartedPulling="2026-02-14 14:20:10.659628768 +0000 UTC m=+1682.685618239" lastFinishedPulling="2026-02-14 14:20:16.136176572 +0000 UTC m=+1688.162166043" observedRunningTime="2026-02-14 14:20:16.643008136 +0000 UTC m=+1688.668997627" watchObservedRunningTime="2026-02-14 14:20:16.661633226 +0000 UTC m=+1688.687622717" Feb 14 14:20:17 crc kubenswrapper[4750]: I0214 14:20:17.627771 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" event={"ID":"35713168-58fa-49ee-8783-2631f53b02a9","Type":"ContainerStarted","Data":"1d1887a7de1dd17ebbb92f9f03cce705003658151fdb8daf17138b999ac7fa3b"} Feb 14 14:20:17 crc kubenswrapper[4750]: I0214 14:20:17.660872 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" podStartSLOduration=3.127134693 podStartE2EDuration="3.660853623s" podCreationTimestamp="2026-02-14 14:20:14 +0000 UTC" firstStartedPulling="2026-02-14 14:20:16.133276249 +0000 UTC m=+1688.159265730" lastFinishedPulling="2026-02-14 14:20:16.666995169 +0000 UTC m=+1688.692984660" observedRunningTime="2026-02-14 14:20:17.651003322 +0000 UTC m=+1689.676992803" watchObservedRunningTime="2026-02-14 
14:20:17.660853623 +0000 UTC m=+1689.686843104" Feb 14 14:20:21 crc kubenswrapper[4750]: I0214 14:20:21.320848 4750 scope.go:117] "RemoveContainer" containerID="d05a287a847a88519b77183927024a040ba2287ae14883a9996d70aec14b6cd0" Feb 14 14:20:21 crc kubenswrapper[4750]: I0214 14:20:21.352756 4750 scope.go:117] "RemoveContainer" containerID="297905ad9d5ede65642c5b7344a7c596166e22f92ff212e8437b8aebf590aa07" Feb 14 14:20:21 crc kubenswrapper[4750]: I0214 14:20:21.420559 4750 scope.go:117] "RemoveContainer" containerID="d6171091c2782e516f7486079df8af2fcbc9c26f34ce7290d45023050ac124ea" Feb 14 14:20:21 crc kubenswrapper[4750]: I0214 14:20:21.478348 4750 scope.go:117] "RemoveContainer" containerID="3db342e71128ca6e06e6bf30876da25e35205ef9b1172ae15b6c1179f97fe921" Feb 14 14:20:21 crc kubenswrapper[4750]: I0214 14:20:21.531323 4750 scope.go:117] "RemoveContainer" containerID="1c7f3f28eb622effc1329bb6464211b0e2243527f545fd0b7c786ed21343df35" Feb 14 14:20:21 crc kubenswrapper[4750]: I0214 14:20:21.593970 4750 scope.go:117] "RemoveContainer" containerID="1c370397b4b256bfc94c924656276de11640d4f24305b3d390f5a20c6ee99fd7" Feb 14 14:20:23 crc kubenswrapper[4750]: I0214 14:20:23.743087 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:20:23 crc kubenswrapper[4750]: E0214 14:20:23.743722 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:20:35 crc kubenswrapper[4750]: I0214 14:20:35.742909 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:20:35 crc 
kubenswrapper[4750]: E0214 14:20:35.743828 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:20:40 crc kubenswrapper[4750]: I0214 14:20:40.950558 4750 generic.go:334] "Generic (PLEG): container finished" podID="3f096636-5e6f-428e-8a30-9433d6ac312c" containerID="cbd83ccd6edda259840f3ed773130d53d0d138ed4ea4fe99f632c151ece22699" exitCode=0 Feb 14 14:20:40 crc kubenswrapper[4750]: I0214 14:20:40.950657 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3f096636-5e6f-428e-8a30-9433d6ac312c","Type":"ContainerDied","Data":"cbd83ccd6edda259840f3ed773130d53d0d138ed4ea4fe99f632c151ece22699"} Feb 14 14:20:41 crc kubenswrapper[4750]: I0214 14:20:41.963161 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3f096636-5e6f-428e-8a30-9433d6ac312c","Type":"ContainerStarted","Data":"cce39337b685215759a501f21824a620787fa475fc7711753a6d55d41f01324d"} Feb 14 14:20:41 crc kubenswrapper[4750]: I0214 14:20:41.964615 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 14 14:20:41 crc kubenswrapper[4750]: I0214 14:20:41.997742 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.997715834 podStartE2EDuration="36.997715834s" podCreationTimestamp="2026-02-14 14:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:20:41.984300262 +0000 UTC m=+1714.010289743" 
watchObservedRunningTime="2026-02-14 14:20:41.997715834 +0000 UTC m=+1714.023705345" Feb 14 14:20:49 crc kubenswrapper[4750]: I0214 14:20:49.742242 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:20:49 crc kubenswrapper[4750]: E0214 14:20:49.743420 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:20:55 crc kubenswrapper[4750]: I0214 14:20:55.363382 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 14:20:55 crc kubenswrapper[4750]: I0214 14:20:55.447411 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:20:59 crc kubenswrapper[4750]: I0214 14:20:59.815863 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="rabbitmq" containerID="cri-o://40a62329bd1c2416df33820f0045cacc23b55fea50787e25c159413b6e5b66d4" gracePeriod=604796 Feb 14 14:21:01 crc kubenswrapper[4750]: I0214 14:21:01.742772 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:21:01 crc kubenswrapper[4750]: E0214 14:21:01.743403 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:21:05 crc kubenswrapper[4750]: I0214 14:21:05.172277 4750 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.320526 4750 generic.go:334] "Generic (PLEG): container finished" podID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerID="40a62329bd1c2416df33820f0045cacc23b55fea50787e25c159413b6e5b66d4" exitCode=0 Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.320955 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8775619e-e8b9-4bee-987c-2cba92f6fcf3","Type":"ContainerDied","Data":"40a62329bd1c2416df33820f0045cacc23b55fea50787e25c159413b6e5b66d4"} Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.567658 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.685186 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-erlang-cookie\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.685313 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8775619e-e8b9-4bee-987c-2cba92f6fcf3-erlang-cookie-secret\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686091 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686167 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-server-conf\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686225 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-plugins-conf\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686274 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-config-data\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686325 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8775619e-e8b9-4bee-987c-2cba92f6fcf3-pod-info\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686356 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52v6s\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-kube-api-access-52v6s\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686396 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-plugins\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686473 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-tls\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.686501 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-confd\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") " Feb 14 14:21:06 
crc kubenswrapper[4750]: I0214 14:21:06.690603 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.703614 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.706366 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-kube-api-access-52v6s" (OuterVolumeSpecName: "kube-api-access-52v6s") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "kube-api-access-52v6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.709582 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.716286 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8775619e-e8b9-4bee-987c-2cba92f6fcf3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.729510 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.757476 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8775619e-e8b9-4bee-987c-2cba92f6fcf3-pod-info" (OuterVolumeSpecName: "pod-info") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.780624 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-config-data" (OuterVolumeSpecName: "config-data") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.784411 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-server-conf" (OuterVolumeSpecName: "server-conf") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797276 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797304 4750 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8775619e-e8b9-4bee-987c-2cba92f6fcf3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797313 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52v6s\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-kube-api-access-52v6s\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797323 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797330 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797338 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797348 4750 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8775619e-e8b9-4bee-987c-2cba92f6fcf3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797355 4750 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.797363 4750 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8775619e-e8b9-4bee-987c-2cba92f6fcf3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.900135 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5" (OuterVolumeSpecName: "persistence") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "pvc-39e26ca4-2137-4805-b842-1376dd20f9a5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: E0214 14:21:06.900555 4750 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes/kubernetes.io~csi/pvc-39e26ca4-2137-4805-b842-1376dd20f9a5/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes/kubernetes.io~csi/pvc-39e26ca4-2137-4805-b842-1376dd20f9a5/vol_data.json]: open /var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes/kubernetes.io~csi/pvc-39e26ca4-2137-4805-b842-1376dd20f9a5/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\" (UID: \"8775619e-e8b9-4bee-987c-2cba92f6fcf3\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes/kubernetes.io~csi/pvc-39e26ca4-2137-4805-b842-1376dd20f9a5/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes/kubernetes.io~csi/pvc-39e26ca4-2137-4805-b842-1376dd20f9a5/vol_data.json]: open 
/var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes/kubernetes.io~csi/pvc-39e26ca4-2137-4805-b842-1376dd20f9a5/vol_data.json: no such file or directory" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.903561 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") on node \"crc\" " Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.939028 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8775619e-e8b9-4bee-987c-2cba92f6fcf3" (UID: "8775619e-e8b9-4bee-987c-2cba92f6fcf3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.964454 4750 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 14:21:06 crc kubenswrapper[4750]: I0214 14:21:06.964758 4750 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-39e26ca4-2137-4805-b842-1376dd20f9a5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5") on node "crc" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.005884 4750 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8775619e-e8b9-4bee-987c-2cba92f6fcf3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.005935 4750 reconciler_common.go:293] "Volume detached for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") on node \"crc\" DevicePath \"\"" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.333516 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8775619e-e8b9-4bee-987c-2cba92f6fcf3","Type":"ContainerDied","Data":"5117d92e1d047bcc4d22aa2f30ad4449f0529bfd2a25ceba69c64a61846f6494"} Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.333568 4750 scope.go:117] "RemoveContainer" containerID="40a62329bd1c2416df33820f0045cacc23b55fea50787e25c159413b6e5b66d4" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.333720 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.362734 4750 scope.go:117] "RemoveContainer" containerID="9199a836dc535c97ea00db338dd3c3eb706b1efaadba65e0b4e87bbe403356f1" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.385504 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.447931 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.498342 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:21:07 crc kubenswrapper[4750]: E0214 14:21:07.498890 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="setup-container" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.498909 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="setup-container" Feb 14 14:21:07 crc kubenswrapper[4750]: E0214 14:21:07.498926 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="rabbitmq" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.498933 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="rabbitmq" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.499765 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" containerName="rabbitmq" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.501717 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.511868 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.631908 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.631979 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632019 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632073 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632095 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/cf78a33e-a78a-4048-afa0-af9c27e4425d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632155 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf78a33e-a78a-4048-afa0-af9c27e4425d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632193 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632233 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xn6\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-kube-api-access-45xn6\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632262 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632357 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.632424 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735012 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735106 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735373 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735418 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735443 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735483 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735507 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf78a33e-a78a-4048-afa0-af9c27e4425d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735537 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf78a33e-a78a-4048-afa0-af9c27e4425d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735563 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " 
pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735591 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xn6\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-kube-api-access-45xn6\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735613 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.735841 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.736507 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.736620 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.737413 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.738471 4750 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.738525 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2ac3bba23e66408833a5e1a27b12337d51f2915a82f5db59eec5106e13b37e5/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.738710 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf78a33e-a78a-4048-afa0-af9c27e4425d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.739662 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf78a33e-a78a-4048-afa0-af9c27e4425d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.742761 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.744207 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.749616 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf78a33e-a78a-4048-afa0-af9c27e4425d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.753487 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xn6\" (UniqueName: \"kubernetes.io/projected/cf78a33e-a78a-4048-afa0-af9c27e4425d-kube-api-access-45xn6\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:07 crc kubenswrapper[4750]: I0214 14:21:07.841663 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39e26ca4-2137-4805-b842-1376dd20f9a5\") pod \"rabbitmq-server-0\" (UID: \"cf78a33e-a78a-4048-afa0-af9c27e4425d\") " pod="openstack/rabbitmq-server-0" Feb 14 14:21:08 crc kubenswrapper[4750]: I0214 14:21:08.128925 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 14:21:08 crc kubenswrapper[4750]: I0214 14:21:08.661467 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 14:21:08 crc kubenswrapper[4750]: I0214 14:21:08.762880 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8775619e-e8b9-4bee-987c-2cba92f6fcf3" path="/var/lib/kubelet/pods/8775619e-e8b9-4bee-987c-2cba92f6fcf3/volumes" Feb 14 14:21:09 crc kubenswrapper[4750]: I0214 14:21:09.363280 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf78a33e-a78a-4048-afa0-af9c27e4425d","Type":"ContainerStarted","Data":"ff94ffb28dda402191622d0ddc0bdede9dbc99a7c396af08d46b7a5e3f804b73"} Feb 14 14:21:11 crc kubenswrapper[4750]: I0214 14:21:11.388330 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf78a33e-a78a-4048-afa0-af9c27e4425d","Type":"ContainerStarted","Data":"866e2c88e45f0232cceed5ea73662af434b2bc602046b0a594139b496323ed53"} Feb 14 14:21:16 crc kubenswrapper[4750]: I0214 14:21:16.742622 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:21:16 crc kubenswrapper[4750]: E0214 14:21:16.743619 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:21:21 crc kubenswrapper[4750]: I0214 14:21:21.853644 4750 scope.go:117] "RemoveContainer" containerID="ecdea597626f4832e073c8620cb6f7bbe9818e1e86a740215bf00876be871a7e" Feb 14 14:21:21 crc kubenswrapper[4750]: I0214 14:21:21.914859 4750 
scope.go:117] "RemoveContainer" containerID="7f5918ed050a13023e1bd7a13e78da8692eb0e13a13e35789d7defca00b9e0ef" Feb 14 14:21:28 crc kubenswrapper[4750]: I0214 14:21:28.751876 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:21:28 crc kubenswrapper[4750]: E0214 14:21:28.752783 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:21:43 crc kubenswrapper[4750]: I0214 14:21:43.743451 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:21:43 crc kubenswrapper[4750]: E0214 14:21:43.744721 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:21:43 crc kubenswrapper[4750]: I0214 14:21:43.872464 4750 generic.go:334] "Generic (PLEG): container finished" podID="cf78a33e-a78a-4048-afa0-af9c27e4425d" containerID="866e2c88e45f0232cceed5ea73662af434b2bc602046b0a594139b496323ed53" exitCode=0 Feb 14 14:21:43 crc kubenswrapper[4750]: I0214 14:21:43.872512 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"cf78a33e-a78a-4048-afa0-af9c27e4425d","Type":"ContainerDied","Data":"866e2c88e45f0232cceed5ea73662af434b2bc602046b0a594139b496323ed53"} Feb 14 14:21:44 crc kubenswrapper[4750]: I0214 14:21:44.886396 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf78a33e-a78a-4048-afa0-af9c27e4425d","Type":"ContainerStarted","Data":"ac348927594c3a204068cb3f7bdf7e4de69dc8feb23628cc955bafed5b1b3fa6"} Feb 14 14:21:44 crc kubenswrapper[4750]: I0214 14:21:44.887090 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 14:21:44 crc kubenswrapper[4750]: I0214 14:21:44.914734 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.91470645 podStartE2EDuration="37.91470645s" podCreationTimestamp="2026-02-14 14:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:21:44.910598894 +0000 UTC m=+1776.936588375" watchObservedRunningTime="2026-02-14 14:21:44.91470645 +0000 UTC m=+1776.940695931" Feb 14 14:21:56 crc kubenswrapper[4750]: I0214 14:21:56.746338 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:21:56 crc kubenswrapper[4750]: E0214 14:21:56.747312 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:21:58 crc kubenswrapper[4750]: I0214 14:21:58.132367 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Feb 14 14:22:08 crc kubenswrapper[4750]: I0214 14:22:08.754350 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:22:08 crc kubenswrapper[4750]: E0214 14:22:08.756468 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:22:22 crc kubenswrapper[4750]: I0214 14:22:22.082543 4750 scope.go:117] "RemoveContainer" containerID="dc4eb70df0864db6c6c5de127a2b2d2ecf0964679372dc96627bb6e89a430442" Feb 14 14:22:22 crc kubenswrapper[4750]: I0214 14:22:22.139858 4750 scope.go:117] "RemoveContainer" containerID="1ebc0634f2a1682681f747e320af0cafc7271be862342adf8c30291df5f00e4a" Feb 14 14:22:22 crc kubenswrapper[4750]: I0214 14:22:22.178270 4750 scope.go:117] "RemoveContainer" containerID="95a0c5ff717fdce072867fdafc844ca0399014a525ac5e0204047083990a755e" Feb 14 14:22:22 crc kubenswrapper[4750]: I0214 14:22:22.742398 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:22:22 crc kubenswrapper[4750]: E0214 14:22:22.742985 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:22:33 crc kubenswrapper[4750]: I0214 14:22:33.743859 4750 scope.go:117] 
"RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:22:33 crc kubenswrapper[4750]: E0214 14:22:33.745574 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:22:48 crc kubenswrapper[4750]: I0214 14:22:48.753685 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:22:48 crc kubenswrapper[4750]: E0214 14:22:48.754527 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:23:01 crc kubenswrapper[4750]: I0214 14:23:01.742839 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:23:01 crc kubenswrapper[4750]: E0214 14:23:01.743783 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:23:06 crc kubenswrapper[4750]: I0214 14:23:06.992045 
4750 generic.go:334] "Generic (PLEG): container finished" podID="35713168-58fa-49ee-8783-2631f53b02a9" containerID="1d1887a7de1dd17ebbb92f9f03cce705003658151fdb8daf17138b999ac7fa3b" exitCode=0 Feb 14 14:23:06 crc kubenswrapper[4750]: I0214 14:23:06.992151 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" event={"ID":"35713168-58fa-49ee-8783-2631f53b02a9","Type":"ContainerDied","Data":"1d1887a7de1dd17ebbb92f9f03cce705003658151fdb8daf17138b999ac7fa3b"} Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.499459 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.577467 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-inventory\") pod \"35713168-58fa-49ee-8783-2631f53b02a9\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.577549 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj9g8\" (UniqueName: \"kubernetes.io/projected/35713168-58fa-49ee-8783-2631f53b02a9-kube-api-access-kj9g8\") pod \"35713168-58fa-49ee-8783-2631f53b02a9\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.577573 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-bootstrap-combined-ca-bundle\") pod \"35713168-58fa-49ee-8783-2631f53b02a9\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.577742 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-ssh-key-openstack-edpm-ipam\") pod \"35713168-58fa-49ee-8783-2631f53b02a9\" (UID: \"35713168-58fa-49ee-8783-2631f53b02a9\") " Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.592075 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35713168-58fa-49ee-8783-2631f53b02a9-kube-api-access-kj9g8" (OuterVolumeSpecName: "kube-api-access-kj9g8") pod "35713168-58fa-49ee-8783-2631f53b02a9" (UID: "35713168-58fa-49ee-8783-2631f53b02a9"). InnerVolumeSpecName "kube-api-access-kj9g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.599543 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "35713168-58fa-49ee-8783-2631f53b02a9" (UID: "35713168-58fa-49ee-8783-2631f53b02a9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.625490 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-inventory" (OuterVolumeSpecName: "inventory") pod "35713168-58fa-49ee-8783-2631f53b02a9" (UID: "35713168-58fa-49ee-8783-2631f53b02a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.644132 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35713168-58fa-49ee-8783-2631f53b02a9" (UID: "35713168-58fa-49ee-8783-2631f53b02a9"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.680766 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.681098 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj9g8\" (UniqueName: \"kubernetes.io/projected/35713168-58fa-49ee-8783-2631f53b02a9-kube-api-access-kj9g8\") on node \"crc\" DevicePath \"\"" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.681120 4750 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:23:08 crc kubenswrapper[4750]: I0214 14:23:08.681130 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35713168-58fa-49ee-8783-2631f53b02a9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.063442 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" event={"ID":"35713168-58fa-49ee-8783-2631f53b02a9","Type":"ContainerDied","Data":"a229a002818d85786deb311d1d7d061470abd8ff4ae4102afca2b9789b96570a"} Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.063499 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a229a002818d85786deb311d1d7d061470abd8ff4ae4102afca2b9789b96570a" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.063618 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.217189 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8"] Feb 14 14:23:09 crc kubenswrapper[4750]: E0214 14:23:09.218021 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35713168-58fa-49ee-8783-2631f53b02a9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.218050 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="35713168-58fa-49ee-8783-2631f53b02a9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.218387 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="35713168-58fa-49ee-8783-2631f53b02a9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.219545 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.227622 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.227779 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.228145 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.228369 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.234592 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8"] Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.317616 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jx5n\" (UniqueName: \"kubernetes.io/projected/3586119e-2daa-4e61-8c43-8e3a9c455ab5-kube-api-access-7jx5n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.318427 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc 
kubenswrapper[4750]: I0214 14:23:09.318537 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.423687 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.423830 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.423970 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jx5n\" (UniqueName: \"kubernetes.io/projected/3586119e-2daa-4e61-8c43-8e3a9c455ab5-kube-api-access-7jx5n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.429078 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.431557 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.454511 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jx5n\" (UniqueName: \"kubernetes.io/projected/3586119e-2daa-4e61-8c43-8e3a9c455ab5-kube-api-access-7jx5n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:09 crc kubenswrapper[4750]: I0214 14:23:09.547949 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:23:10 crc kubenswrapper[4750]: I0214 14:23:10.194189 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8"] Feb 14 14:23:10 crc kubenswrapper[4750]: W0214 14:23:10.213302 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3586119e_2daa_4e61_8c43_8e3a9c455ab5.slice/crio-b029a8c2506c7c9c76aaf8a45445bc96558e220f2a7bf0410ad0c33a062bd5e3 WatchSource:0}: Error finding container b029a8c2506c7c9c76aaf8a45445bc96558e220f2a7bf0410ad0c33a062bd5e3: Status 404 returned error can't find the container with id b029a8c2506c7c9c76aaf8a45445bc96558e220f2a7bf0410ad0c33a062bd5e3 Feb 14 14:23:11 crc kubenswrapper[4750]: I0214 14:23:11.096186 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" event={"ID":"3586119e-2daa-4e61-8c43-8e3a9c455ab5","Type":"ContainerStarted","Data":"5eec7238ec4f53af212eeac3661a1735230d5aa9f5f20cd811fa7fb7d4858ecb"} Feb 14 14:23:11 crc kubenswrapper[4750]: I0214 14:23:11.097140 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" event={"ID":"3586119e-2daa-4e61-8c43-8e3a9c455ab5","Type":"ContainerStarted","Data":"b029a8c2506c7c9c76aaf8a45445bc96558e220f2a7bf0410ad0c33a062bd5e3"} Feb 14 14:23:11 crc kubenswrapper[4750]: I0214 14:23:11.127744 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" podStartSLOduration=1.63873878 podStartE2EDuration="2.127703015s" podCreationTimestamp="2026-02-14 14:23:09 +0000 UTC" firstStartedPulling="2026-02-14 14:23:10.215073362 +0000 UTC m=+1862.241062863" lastFinishedPulling="2026-02-14 14:23:10.704037577 +0000 UTC 
m=+1862.730027098" observedRunningTime="2026-02-14 14:23:11.113434649 +0000 UTC m=+1863.139424120" watchObservedRunningTime="2026-02-14 14:23:11.127703015 +0000 UTC m=+1863.153692546" Feb 14 14:23:16 crc kubenswrapper[4750]: I0214 14:23:16.742771 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:23:16 crc kubenswrapper[4750]: E0214 14:23:16.743945 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.290819 4750 scope.go:117] "RemoveContainer" containerID="9c09008851f386a37fbf7ca0119cdf4ad61024dfee3600e57a1e0619609979fc" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.325692 4750 scope.go:117] "RemoveContainer" containerID="8eb9c995aaba8f88bea8e4f07760a7411747e1b92a0f7e403ed56992a693f69d" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.360019 4750 scope.go:117] "RemoveContainer" containerID="c1618f64fa12d7622caa3d7a0e228fc5e59f5bfcf25d88609f6e6cac8e9f12ce" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.393286 4750 scope.go:117] "RemoveContainer" containerID="62b13efd7901f4ac9dcc84a1738c17073a8b6c67679e6cf87d993877d1089fd7" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.422012 4750 scope.go:117] "RemoveContainer" containerID="45a6f95e6656acaddf988afdf64ee79bb28a1cd751acf42d4f0299879c6e05b5" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.456531 4750 scope.go:117] "RemoveContainer" containerID="9986d757abda1880af027b07a03a7891fe24f5f6a0846c7107babb53daed340b" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.492083 4750 
scope.go:117] "RemoveContainer" containerID="52d06e7fea7a3a0c977e6fb59c1384d6ac07e2ec1ac41ffa0b5f53ded75f9aa8" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.574692 4750 scope.go:117] "RemoveContainer" containerID="0f3e35aa866da84c3889f74cef2198d4373a34c660419b6d63ff492e8ecac5a0" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.610903 4750 scope.go:117] "RemoveContainer" containerID="c7e458f9a6253ac3969302512df4866f7f3b505bd5f59f6e1b3daebb7174f2b7" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.639507 4750 scope.go:117] "RemoveContainer" containerID="faf2066662005fc0b096e46726c0c9001ea538c0bcb30924aed86e8ddcba769d" Feb 14 14:23:22 crc kubenswrapper[4750]: I0214 14:23:22.669096 4750 scope.go:117] "RemoveContainer" containerID="6dca7009d852793796c0b1875030461e6fd6db3e0e9852f0c8b2701670d6cd2a" Feb 14 14:23:28 crc kubenswrapper[4750]: I0214 14:23:28.758412 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:23:28 crc kubenswrapper[4750]: E0214 14:23:28.759610 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:23:35 crc kubenswrapper[4750]: I0214 14:23:35.043613 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-24be-account-create-update-skxg8"] Feb 14 14:23:35 crc kubenswrapper[4750]: I0214 14:23:35.055301 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-24be-account-create-update-skxg8"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.054853 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-3801-account-create-update-vcr52"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.067832 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sttwb"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.079155 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8mdvr"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.089846 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3801-account-create-update-vcr52"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.100626 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d1e0-account-create-update-mrqds"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.112331 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sttwb"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.122936 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bjdpv"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.133048 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8mdvr"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.144002 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bjdpv"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.153548 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d1e0-account-create-update-mrqds"] Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.759550 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffd32b9-4e66-43e5-b67c-d6b7a9cce266" path="/var/lib/kubelet/pods/0ffd32b9-4e66-43e5-b67c-d6b7a9cce266/volumes" Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.762104 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5a5a81c3-f760-45a0-aaff-5a5252f1e4ac" path="/var/lib/kubelet/pods/5a5a81c3-f760-45a0-aaff-5a5252f1e4ac/volumes" Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.763732 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="640eb31b-ab86-4684-9b86-9fab7eb5c26a" path="/var/lib/kubelet/pods/640eb31b-ab86-4684-9b86-9fab7eb5c26a/volumes" Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.765478 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a074c53-8712-4bea-a4b9-5af6f3e8daab" path="/var/lib/kubelet/pods/7a074c53-8712-4bea-a4b9-5af6f3e8daab/volumes" Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.767755 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb695f49-57cd-490d-a9ab-208a33015140" path="/var/lib/kubelet/pods/bb695f49-57cd-490d-a9ab-208a33015140/volumes" Feb 14 14:23:36 crc kubenswrapper[4750]: I0214 14:23:36.769056 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8ba1fe-164b-4f82-a893-aabf47cbb6f3" path="/var/lib/kubelet/pods/ce8ba1fe-164b-4f82-a893-aabf47cbb6f3/volumes" Feb 14 14:23:37 crc kubenswrapper[4750]: I0214 14:23:37.056742 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9e2a-account-create-update-hdhfj"] Feb 14 14:23:37 crc kubenswrapper[4750]: I0214 14:23:37.069709 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p7pt9"] Feb 14 14:23:37 crc kubenswrapper[4750]: I0214 14:23:37.083613 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9e2a-account-create-update-hdhfj"] Feb 14 14:23:37 crc kubenswrapper[4750]: I0214 14:23:37.094494 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p7pt9"] Feb 14 14:23:38 crc kubenswrapper[4750]: I0214 14:23:38.760283 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae657046-3a10-4db8-936f-ff96a836579f" 
path="/var/lib/kubelet/pods/ae657046-3a10-4db8-936f-ff96a836579f/volumes" Feb 14 14:23:38 crc kubenswrapper[4750]: I0214 14:23:38.761354 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18bd93a-d349-4cc1-86ad-9bd86df1e566" path="/var/lib/kubelet/pods/e18bd93a-d349-4cc1-86ad-9bd86df1e566/volumes" Feb 14 14:23:41 crc kubenswrapper[4750]: I0214 14:23:41.742561 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:23:42 crc kubenswrapper[4750]: I0214 14:23:42.565779 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"affcfbd57c375f679036ebe11e90cd4a9cc2328cc994179256eb16ba5524b032"} Feb 14 14:23:46 crc kubenswrapper[4750]: I0214 14:23:46.037990 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx"] Feb 14 14:23:46 crc kubenswrapper[4750]: I0214 14:23:46.051610 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-bb7vx"] Feb 14 14:23:46 crc kubenswrapper[4750]: I0214 14:23:46.766767 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441c9e61-7af7-44d7-825f-cab61e06ad3a" path="/var/lib/kubelet/pods/441c9e61-7af7-44d7-825f-cab61e06ad3a/volumes" Feb 14 14:23:47 crc kubenswrapper[4750]: I0214 14:23:47.042592 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-2462-account-create-update-2ctnw"] Feb 14 14:23:47 crc kubenswrapper[4750]: I0214 14:23:47.055225 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-2462-account-create-update-2ctnw"] Feb 14 14:23:48 crc kubenswrapper[4750]: I0214 14:23:48.758151 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="29c1a2e7-75be-464d-96d5-21957aecf6ae" path="/var/lib/kubelet/pods/29c1a2e7-75be-464d-96d5-21957aecf6ae/volumes" Feb 14 14:24:01 crc kubenswrapper[4750]: I0214 14:24:01.045747 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xg2ph"] Feb 14 14:24:01 crc kubenswrapper[4750]: I0214 14:24:01.058933 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xg2ph"] Feb 14 14:24:02 crc kubenswrapper[4750]: I0214 14:24:02.765605 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc5808f-77c2-4e08-9feb-f8f27081f7c8" path="/var/lib/kubelet/pods/bcc5808f-77c2-4e08-9feb-f8f27081f7c8/volumes" Feb 14 14:24:09 crc kubenswrapper[4750]: I0214 14:24:09.045822 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r6mf6"] Feb 14 14:24:09 crc kubenswrapper[4750]: I0214 14:24:09.059604 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r6mf6"] Feb 14 14:24:10 crc kubenswrapper[4750]: I0214 14:24:10.756572 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9f7486-cff2-4a61-8c5e-c71977aab921" path="/var/lib/kubelet/pods/4c9f7486-cff2-4a61-8c5e-c71977aab921/volumes" Feb 14 14:24:12 crc kubenswrapper[4750]: I0214 14:24:12.049691 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-j4kbh"] Feb 14 14:24:12 crc kubenswrapper[4750]: I0214 14:24:12.064088 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-j4kbh"] Feb 14 14:24:12 crc kubenswrapper[4750]: I0214 14:24:12.763190 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f183dd0-25fa-4b38-b6a0-d7f670aa5433" path="/var/lib/kubelet/pods/7f183dd0-25fa-4b38-b6a0-d7f670aa5433/volumes" Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.060304 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-5cf5-account-create-update-s64xm"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.082458 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4dmfv"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.097710 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zz2gv"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.111870 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6534-account-create-update-bsnpb"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.126024 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-5cf5-account-create-update-s64xm"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.140096 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-dfac-account-create-update-lxm7n"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.150041 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4dmfv"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.159852 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6534-account-create-update-bsnpb"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.169312 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zz2gv"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.178615 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-dfac-account-create-update-lxm7n"] Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.763374 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3b2bdb-77bc-4578-adb8-28b3ee6e911d" path="/var/lib/kubelet/pods/0e3b2bdb-77bc-4578-adb8-28b3ee6e911d/volumes" Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.765559 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188954af-a7be-4dc4-9c84-eb83c5c39d7e" 
path="/var/lib/kubelet/pods/188954af-a7be-4dc4-9c84-eb83c5c39d7e/volumes" Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.767926 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49101baa-13ea-46fb-8e04-403d10c5e7ce" path="/var/lib/kubelet/pods/49101baa-13ea-46fb-8e04-403d10c5e7ce/volumes" Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.769893 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95df37d4-bdf9-49e2-8286-085cd7975f98" path="/var/lib/kubelet/pods/95df37d4-bdf9-49e2-8286-085cd7975f98/volumes" Feb 14 14:24:14 crc kubenswrapper[4750]: I0214 14:24:14.774620 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27ae574-7796-42fe-854a-8a82e9936e2d" path="/var/lib/kubelet/pods/c27ae574-7796-42fe-854a-8a82e9936e2d/volumes" Feb 14 14:24:16 crc kubenswrapper[4750]: I0214 14:24:16.035632 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hrsll"] Feb 14 14:24:16 crc kubenswrapper[4750]: I0214 14:24:16.050026 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hrsll"] Feb 14 14:24:16 crc kubenswrapper[4750]: I0214 14:24:16.758756 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409d5a14-70a1-4109-894d-84c2b04e297d" path="/var/lib/kubelet/pods/409d5a14-70a1-4109-894d-84c2b04e297d/volumes" Feb 14 14:24:17 crc kubenswrapper[4750]: I0214 14:24:17.044619 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6232-account-create-update-7gdfp"] Feb 14 14:24:17 crc kubenswrapper[4750]: I0214 14:24:17.064531 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6232-account-create-update-7gdfp"] Feb 14 14:24:18 crc kubenswrapper[4750]: I0214 14:24:18.770616 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e" path="/var/lib/kubelet/pods/6e5dfd6b-b3db-4f9d-9c0b-797cb731d44e/volumes" Feb 
14 14:24:22 crc kubenswrapper[4750]: I0214 14:24:22.037418 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-b75n6"] Feb 14 14:24:22 crc kubenswrapper[4750]: I0214 14:24:22.058752 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-b75n6"] Feb 14 14:24:22 crc kubenswrapper[4750]: I0214 14:24:22.766712 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9044d3-72ef-43b1-b353-a02d6a3538b7" path="/var/lib/kubelet/pods/5f9044d3-72ef-43b1-b353-a02d6a3538b7/volumes" Feb 14 14:24:22 crc kubenswrapper[4750]: I0214 14:24:22.864718 4750 scope.go:117] "RemoveContainer" containerID="98069db1af30341072d6aad80f3343c8c2406960fae346c05f58f998c30fa29e" Feb 14 14:24:22 crc kubenswrapper[4750]: I0214 14:24:22.901678 4750 scope.go:117] "RemoveContainer" containerID="5f26d4ca29e13fdadf1a72ef77a23842620ce12a318be0410e8ddbc7206d9d27" Feb 14 14:24:22 crc kubenswrapper[4750]: I0214 14:24:22.968412 4750 scope.go:117] "RemoveContainer" containerID="bb9e9fbda37e7933e06b98dd24aec68c6fae5eb809462e3e532352a24652c21e" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.037871 4750 scope.go:117] "RemoveContainer" containerID="d8c6cbb45bd763e24889e8d28829421fa0053d9bd0e724cd0cb2a41d4371362d" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.072657 4750 scope.go:117] "RemoveContainer" containerID="7d84cb1a9eb74fa44e8910e260f94d4842f5b014fe79a398b4d93c875cbafe36" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.135538 4750 scope.go:117] "RemoveContainer" containerID="6420d01200b8259f0f9877d325bb290d3e674fa5925fd302cabeb88e04a39a8d" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.195599 4750 scope.go:117] "RemoveContainer" containerID="99e75f767f9a85b267711f79f05532a03911ace516e341c7374849148aa09fbb" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.231200 4750 scope.go:117] "RemoveContainer" containerID="dd0601f4671a6b55ef25af865341d59a1d8d068e9982a8db1fa4e42902b24192" Feb 
14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.256222 4750 scope.go:117] "RemoveContainer" containerID="280dc5d6c9a7128284ab012ac69d59a5f12f047c503d5a32ea999c2c535e060e" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.280194 4750 scope.go:117] "RemoveContainer" containerID="49614988fe67cac50432a075c1d79bef4a10f9f712a266398a8f1b9e2efb5b07" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.312917 4750 scope.go:117] "RemoveContainer" containerID="afe0f83fe8f496efd9861f2f71616b04d396a9d50fbadf0a88b6c53b5a8410bd" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.344277 4750 scope.go:117] "RemoveContainer" containerID="63fb58cd16fa2d9fe9099b680abcd9fecaab1bae5536ea7589f5456df45d7604" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.375527 4750 scope.go:117] "RemoveContainer" containerID="fe1e35d0b7cab885f948884d76bc5e2f660adbcba68c0a4ac6f6677f2be1df16" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.399945 4750 scope.go:117] "RemoveContainer" containerID="d3ff0b79d2f1cb2b658be7080370c62ccb4b0996018e0f5a78242a22baf930e0" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.429748 4750 scope.go:117] "RemoveContainer" containerID="00ad5dc7d5e2c3d0f42d55c283270b07316b132fab2955495f0b2fc745aa9b81" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.453636 4750 scope.go:117] "RemoveContainer" containerID="801486fd391c57b47eb5ec550fe3a34d8c5392070940de5b70f7d2cca0817627" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.477920 4750 scope.go:117] "RemoveContainer" containerID="45d2426f9da735749b52cf51ca8bed88984aa43843e07437f894fb765bca151c" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.516656 4750 scope.go:117] "RemoveContainer" containerID="6b08cbfc6b90535c3b07afebb7bf8ed5cc09772f998c7b8ec5eace16eb76c1b0" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.553359 4750 scope.go:117] "RemoveContainer" containerID="4ebe93ac751bed7807ecb03304692224dd4168a252dfe1df194c7456a923caf6" Feb 14 14:24:23 crc 
kubenswrapper[4750]: I0214 14:24:23.588026 4750 scope.go:117] "RemoveContainer" containerID="ce29fdb2ccd3daeea6ab4523989850d1e6927522063615d2a1e59900b60e1bd5" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.624462 4750 scope.go:117] "RemoveContainer" containerID="720066c275090ad483b046df62bafd502b15f2175ceeacfbef11b0d63a7c2419" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.659790 4750 scope.go:117] "RemoveContainer" containerID="8b5dabd35dbf29eb78cac59dec247c589f0f1faba7ab4172934b742fbdf0221b" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.696205 4750 scope.go:117] "RemoveContainer" containerID="fd193fdad1a4ca2212289d54ede52b4f750c3d8a2cb95787f49ddfe4ab5eeab2" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.719147 4750 scope.go:117] "RemoveContainer" containerID="fba75ea03376eb388794d25cdf3194ae3965eaafb9cca925b748fdfc87942649" Feb 14 14:24:23 crc kubenswrapper[4750]: I0214 14:24:23.758385 4750 scope.go:117] "RemoveContainer" containerID="816220762eaf1d0e0fd24f9eb332f773b8eb341ba01ae3368f7b97047fc602e6" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.304210 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g7xkl"] Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.307535 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.326894 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7gn\" (UniqueName: \"kubernetes.io/projected/cb0018be-7214-4c0d-b203-0fd1ca8653e1-kube-api-access-7s7gn\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.326987 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-utilities\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.327020 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-catalog-content\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.329916 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7xkl"] Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.429601 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-utilities\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.429686 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-catalog-content\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.429986 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7gn\" (UniqueName: \"kubernetes.io/projected/cb0018be-7214-4c0d-b203-0fd1ca8653e1-kube-api-access-7s7gn\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.430248 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-utilities\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.430273 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-catalog-content\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.453689 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7gn\" (UniqueName: \"kubernetes.io/projected/cb0018be-7214-4c0d-b203-0fd1ca8653e1-kube-api-access-7s7gn\") pod \"redhat-marketplace-g7xkl\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:24 crc kubenswrapper[4750]: I0214 14:24:24.629206 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:25 crc kubenswrapper[4750]: W0214 14:24:25.155254 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb0018be_7214_4c0d_b203_0fd1ca8653e1.slice/crio-eb989028f1c1df90e8c27cc2a1a86a71920699610056563067c4264a9ebae510 WatchSource:0}: Error finding container eb989028f1c1df90e8c27cc2a1a86a71920699610056563067c4264a9ebae510: Status 404 returned error can't find the container with id eb989028f1c1df90e8c27cc2a1a86a71920699610056563067c4264a9ebae510 Feb 14 14:24:25 crc kubenswrapper[4750]: I0214 14:24:25.157471 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7xkl"] Feb 14 14:24:25 crc kubenswrapper[4750]: I0214 14:24:25.672406 4750 generic.go:334] "Generic (PLEG): container finished" podID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerID="a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f" exitCode=0 Feb 14 14:24:25 crc kubenswrapper[4750]: I0214 14:24:25.672478 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerDied","Data":"a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f"} Feb 14 14:24:25 crc kubenswrapper[4750]: I0214 14:24:25.672639 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerStarted","Data":"eb989028f1c1df90e8c27cc2a1a86a71920699610056563067c4264a9ebae510"} Feb 14 14:24:25 crc kubenswrapper[4750]: I0214 14:24:25.675814 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:24:26 crc kubenswrapper[4750]: I0214 14:24:26.688176 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerStarted","Data":"dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e"} Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.691788 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x84cl"] Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.695180 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.712591 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x84cl"] Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.811623 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-catalog-content\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.811802 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-utilities\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.812032 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57gt\" (UniqueName: \"kubernetes.io/projected/c666a897-0228-4df7-a74b-ec322ce78c34-kube-api-access-j57gt\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 
14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.914742 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-utilities\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.914981 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57gt\" (UniqueName: \"kubernetes.io/projected/c666a897-0228-4df7-a74b-ec322ce78c34-kube-api-access-j57gt\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.915210 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-catalog-content\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.915983 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-utilities\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 14:24:27.916788 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-catalog-content\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:27 crc kubenswrapper[4750]: I0214 
14:24:27.938272 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57gt\" (UniqueName: \"kubernetes.io/projected/c666a897-0228-4df7-a74b-ec322ce78c34-kube-api-access-j57gt\") pod \"certified-operators-x84cl\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:28 crc kubenswrapper[4750]: I0214 14:24:28.018511 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:28 crc kubenswrapper[4750]: I0214 14:24:28.577863 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x84cl"] Feb 14 14:24:28 crc kubenswrapper[4750]: I0214 14:24:28.717359 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x84cl" event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerStarted","Data":"037b57720d8e0edd204a769c5d8ceca857dd1cf03041dd1029ad9ce6d6068ef8"} Feb 14 14:24:28 crc kubenswrapper[4750]: I0214 14:24:28.723591 4750 generic.go:334] "Generic (PLEG): container finished" podID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerID="dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e" exitCode=0 Feb 14 14:24:28 crc kubenswrapper[4750]: I0214 14:24:28.723699 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerDied","Data":"dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e"} Feb 14 14:24:29 crc kubenswrapper[4750]: I0214 14:24:29.735486 4750 generic.go:334] "Generic (PLEG): container finished" podID="c666a897-0228-4df7-a74b-ec322ce78c34" containerID="8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524" exitCode=0 Feb 14 14:24:29 crc kubenswrapper[4750]: I0214 14:24:29.735559 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x84cl" event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerDied","Data":"8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524"} Feb 14 14:24:29 crc kubenswrapper[4750]: I0214 14:24:29.739455 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerStarted","Data":"5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100"} Feb 14 14:24:29 crc kubenswrapper[4750]: I0214 14:24:29.785817 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g7xkl" podStartSLOduration=2.284047173 podStartE2EDuration="5.78579416s" podCreationTimestamp="2026-02-14 14:24:24 +0000 UTC" firstStartedPulling="2026-02-14 14:24:25.675535755 +0000 UTC m=+1937.701525236" lastFinishedPulling="2026-02-14 14:24:29.177282712 +0000 UTC m=+1941.203272223" observedRunningTime="2026-02-14 14:24:29.775860198 +0000 UTC m=+1941.801849709" watchObservedRunningTime="2026-02-14 14:24:29.78579416 +0000 UTC m=+1941.811783641" Feb 14 14:24:30 crc kubenswrapper[4750]: I0214 14:24:30.765958 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x84cl" event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerStarted","Data":"1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012"} Feb 14 14:24:32 crc kubenswrapper[4750]: I0214 14:24:32.787590 4750 generic.go:334] "Generic (PLEG): container finished" podID="c666a897-0228-4df7-a74b-ec322ce78c34" containerID="1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012" exitCode=0 Feb 14 14:24:32 crc kubenswrapper[4750]: I0214 14:24:32.787701 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x84cl" 
event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerDied","Data":"1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012"} Feb 14 14:24:33 crc kubenswrapper[4750]: I0214 14:24:33.801489 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x84cl" event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerStarted","Data":"6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088"} Feb 14 14:24:33 crc kubenswrapper[4750]: I0214 14:24:33.826961 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x84cl" podStartSLOduration=3.409822991 podStartE2EDuration="6.826939924s" podCreationTimestamp="2026-02-14 14:24:27 +0000 UTC" firstStartedPulling="2026-02-14 14:24:29.737845766 +0000 UTC m=+1941.763835247" lastFinishedPulling="2026-02-14 14:24:33.154962699 +0000 UTC m=+1945.180952180" observedRunningTime="2026-02-14 14:24:33.818401161 +0000 UTC m=+1945.844390642" watchObservedRunningTime="2026-02-14 14:24:33.826939924 +0000 UTC m=+1945.852929405" Feb 14 14:24:34 crc kubenswrapper[4750]: I0214 14:24:34.629688 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:34 crc kubenswrapper[4750]: I0214 14:24:34.629991 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:35 crc kubenswrapper[4750]: I0214 14:24:35.675012 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g7xkl" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="registry-server" probeResult="failure" output=< Feb 14 14:24:35 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:24:35 crc kubenswrapper[4750]: > Feb 14 14:24:38 crc kubenswrapper[4750]: I0214 14:24:38.018830 4750 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:38 crc kubenswrapper[4750]: I0214 14:24:38.019286 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:39 crc kubenswrapper[4750]: I0214 14:24:39.071455 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-x84cl" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="registry-server" probeResult="failure" output=< Feb 14 14:24:39 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:24:39 crc kubenswrapper[4750]: > Feb 14 14:24:44 crc kubenswrapper[4750]: I0214 14:24:44.708754 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:44 crc kubenswrapper[4750]: I0214 14:24:44.776335 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:44 crc kubenswrapper[4750]: I0214 14:24:44.959284 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7xkl"] Feb 14 14:24:45 crc kubenswrapper[4750]: I0214 14:24:45.949867 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g7xkl" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="registry-server" containerID="cri-o://5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100" gracePeriod=2 Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.627132 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.796248 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-catalog-content\") pod \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.796384 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-utilities\") pod \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.796552 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7gn\" (UniqueName: \"kubernetes.io/projected/cb0018be-7214-4c0d-b203-0fd1ca8653e1-kube-api-access-7s7gn\") pod \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\" (UID: \"cb0018be-7214-4c0d-b203-0fd1ca8653e1\") " Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.799721 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-utilities" (OuterVolumeSpecName: "utilities") pod "cb0018be-7214-4c0d-b203-0fd1ca8653e1" (UID: "cb0018be-7214-4c0d-b203-0fd1ca8653e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.805733 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0018be-7214-4c0d-b203-0fd1ca8653e1-kube-api-access-7s7gn" (OuterVolumeSpecName: "kube-api-access-7s7gn") pod "cb0018be-7214-4c0d-b203-0fd1ca8653e1" (UID: "cb0018be-7214-4c0d-b203-0fd1ca8653e1"). InnerVolumeSpecName "kube-api-access-7s7gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.834772 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb0018be-7214-4c0d-b203-0fd1ca8653e1" (UID: "cb0018be-7214-4c0d-b203-0fd1ca8653e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.905560 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7gn\" (UniqueName: \"kubernetes.io/projected/cb0018be-7214-4c0d-b203-0fd1ca8653e1-kube-api-access-7s7gn\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.905605 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.905630 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0018be-7214-4c0d-b203-0fd1ca8653e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.971334 4750 generic.go:334] "Generic (PLEG): container finished" podID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerID="5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100" exitCode=0 Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.971412 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerDied","Data":"5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100"} Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.971445 4750 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-g7xkl" event={"ID":"cb0018be-7214-4c0d-b203-0fd1ca8653e1","Type":"ContainerDied","Data":"eb989028f1c1df90e8c27cc2a1a86a71920699610056563067c4264a9ebae510"} Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.971461 4750 scope.go:117] "RemoveContainer" containerID="5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.971601 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7xkl" Feb 14 14:24:46 crc kubenswrapper[4750]: I0214 14:24:46.998665 4750 scope.go:117] "RemoveContainer" containerID="dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.028144 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7xkl"] Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.041295 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7xkl"] Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.054719 4750 scope.go:117] "RemoveContainer" containerID="a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.090611 4750 scope.go:117] "RemoveContainer" containerID="5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100" Feb 14 14:24:47 crc kubenswrapper[4750]: E0214 14:24:47.091091 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100\": container with ID starting with 5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100 not found: ID does not exist" containerID="5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.091146 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100"} err="failed to get container status \"5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100\": rpc error: code = NotFound desc = could not find container \"5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100\": container with ID starting with 5daa3fc4524aa6e3594de8fbc9471301d61d3f4119ed3004a7882b025c2b7100 not found: ID does not exist" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.091174 4750 scope.go:117] "RemoveContainer" containerID="dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e" Feb 14 14:24:47 crc kubenswrapper[4750]: E0214 14:24:47.091646 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e\": container with ID starting with dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e not found: ID does not exist" containerID="dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.091756 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e"} err="failed to get container status \"dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e\": rpc error: code = NotFound desc = could not find container \"dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e\": container with ID starting with dde04e071ed40a96712910da044aa5f89e0d332de8cf9920dfffe62dc5aa4b2e not found: ID does not exist" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.091836 4750 scope.go:117] "RemoveContainer" containerID="a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f" Feb 14 14:24:47 crc kubenswrapper[4750]: E0214 
14:24:47.092229 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f\": container with ID starting with a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f not found: ID does not exist" containerID="a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f" Feb 14 14:24:47 crc kubenswrapper[4750]: I0214 14:24:47.092255 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f"} err="failed to get container status \"a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f\": rpc error: code = NotFound desc = could not find container \"a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f\": container with ID starting with a049f56db31d9ba16605ec45b468529d9825696a661c0ea0d86013032859821f not found: ID does not exist" Feb 14 14:24:48 crc kubenswrapper[4750]: I0214 14:24:48.076551 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:48 crc kubenswrapper[4750]: I0214 14:24:48.139554 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:48 crc kubenswrapper[4750]: I0214 14:24:48.761276 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" path="/var/lib/kubelet/pods/cb0018be-7214-4c0d-b203-0fd1ca8653e1/volumes" Feb 14 14:24:49 crc kubenswrapper[4750]: I0214 14:24:49.368518 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x84cl"] Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.004283 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-x84cl" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="registry-server" containerID="cri-o://6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088" gracePeriod=2 Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.553419 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.708383 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-catalog-content\") pod \"c666a897-0228-4df7-a74b-ec322ce78c34\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.708729 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-utilities\") pod \"c666a897-0228-4df7-a74b-ec322ce78c34\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.708858 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j57gt\" (UniqueName: \"kubernetes.io/projected/c666a897-0228-4df7-a74b-ec322ce78c34-kube-api-access-j57gt\") pod \"c666a897-0228-4df7-a74b-ec322ce78c34\" (UID: \"c666a897-0228-4df7-a74b-ec322ce78c34\") " Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.710320 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-utilities" (OuterVolumeSpecName: "utilities") pod "c666a897-0228-4df7-a74b-ec322ce78c34" (UID: "c666a897-0228-4df7-a74b-ec322ce78c34"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.715095 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c666a897-0228-4df7-a74b-ec322ce78c34-kube-api-access-j57gt" (OuterVolumeSpecName: "kube-api-access-j57gt") pod "c666a897-0228-4df7-a74b-ec322ce78c34" (UID: "c666a897-0228-4df7-a74b-ec322ce78c34"). InnerVolumeSpecName "kube-api-access-j57gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.781620 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c666a897-0228-4df7-a74b-ec322ce78c34" (UID: "c666a897-0228-4df7-a74b-ec322ce78c34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.811721 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j57gt\" (UniqueName: \"kubernetes.io/projected/c666a897-0228-4df7-a74b-ec322ce78c34-kube-api-access-j57gt\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.811756 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:50 crc kubenswrapper[4750]: I0214 14:24:50.811765 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c666a897-0228-4df7-a74b-ec322ce78c34-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.015494 4750 generic.go:334] "Generic (PLEG): container finished" podID="c666a897-0228-4df7-a74b-ec322ce78c34" 
containerID="6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088" exitCode=0 Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.015599 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x84cl" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.015999 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x84cl" event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerDied","Data":"6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088"} Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.016215 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x84cl" event={"ID":"c666a897-0228-4df7-a74b-ec322ce78c34","Type":"ContainerDied","Data":"037b57720d8e0edd204a769c5d8ceca857dd1cf03041dd1029ad9ce6d6068ef8"} Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.016293 4750 scope.go:117] "RemoveContainer" containerID="6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.051819 4750 scope.go:117] "RemoveContainer" containerID="1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.055854 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x84cl"] Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.065996 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x84cl"] Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.073637 4750 scope.go:117] "RemoveContainer" containerID="8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.129640 4750 scope.go:117] "RemoveContainer" containerID="6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088" Feb 14 
14:24:51 crc kubenswrapper[4750]: E0214 14:24:51.130233 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088\": container with ID starting with 6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088 not found: ID does not exist" containerID="6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.130373 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088"} err="failed to get container status \"6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088\": rpc error: code = NotFound desc = could not find container \"6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088\": container with ID starting with 6764dcefe930e280f33eb8f6592bb290c1db774813ae4461294ed058ec384088 not found: ID does not exist" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.130484 4750 scope.go:117] "RemoveContainer" containerID="1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012" Feb 14 14:24:51 crc kubenswrapper[4750]: E0214 14:24:51.130852 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012\": container with ID starting with 1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012 not found: ID does not exist" containerID="1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.130894 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012"} err="failed to get container status 
\"1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012\": rpc error: code = NotFound desc = could not find container \"1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012\": container with ID starting with 1af8ec22f44f0a7f22f8fe81098b0e41acd4a44cfe5a794c1ff844d1c0e29012 not found: ID does not exist" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.130922 4750 scope.go:117] "RemoveContainer" containerID="8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524" Feb 14 14:24:51 crc kubenswrapper[4750]: E0214 14:24:51.131388 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524\": container with ID starting with 8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524 not found: ID does not exist" containerID="8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524" Feb 14 14:24:51 crc kubenswrapper[4750]: I0214 14:24:51.131509 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524"} err="failed to get container status \"8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524\": rpc error: code = NotFound desc = could not find container \"8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524\": container with ID starting with 8d231f91aa7ea2985b8b293a039e4aa76ba86bbf77656ab3d4be64c11bd45524 not found: ID does not exist" Feb 14 14:24:52 crc kubenswrapper[4750]: I0214 14:24:52.764931 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" path="/var/lib/kubelet/pods/c666a897-0228-4df7-a74b-ec322ce78c34/volumes" Feb 14 14:24:53 crc kubenswrapper[4750]: I0214 14:24:53.046195 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nksfx"] Feb 14 14:24:53 crc 
kubenswrapper[4750]: I0214 14:24:53.063661 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nksfx"] Feb 14 14:24:54 crc kubenswrapper[4750]: I0214 14:24:54.765864 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1af0ef-9644-45bb-9c3f-c05350b180e8" path="/var/lib/kubelet/pods/8f1af0ef-9644-45bb-9c3f-c05350b180e8/volumes" Feb 14 14:24:57 crc kubenswrapper[4750]: I0214 14:24:57.090006 4750 generic.go:334] "Generic (PLEG): container finished" podID="3586119e-2daa-4e61-8c43-8e3a9c455ab5" containerID="5eec7238ec4f53af212eeac3661a1735230d5aa9f5f20cd811fa7fb7d4858ecb" exitCode=0 Feb 14 14:24:57 crc kubenswrapper[4750]: I0214 14:24:57.090087 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" event={"ID":"3586119e-2daa-4e61-8c43-8e3a9c455ab5","Type":"ContainerDied","Data":"5eec7238ec4f53af212eeac3661a1735230d5aa9f5f20cd811fa7fb7d4858ecb"} Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.609804 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.786607 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-inventory\") pod \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.786716 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-ssh-key-openstack-edpm-ipam\") pod \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.787821 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jx5n\" (UniqueName: \"kubernetes.io/projected/3586119e-2daa-4e61-8c43-8e3a9c455ab5-kube-api-access-7jx5n\") pod \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\" (UID: \"3586119e-2daa-4e61-8c43-8e3a9c455ab5\") " Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.793383 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3586119e-2daa-4e61-8c43-8e3a9c455ab5-kube-api-access-7jx5n" (OuterVolumeSpecName: "kube-api-access-7jx5n") pod "3586119e-2daa-4e61-8c43-8e3a9c455ab5" (UID: "3586119e-2daa-4e61-8c43-8e3a9c455ab5"). InnerVolumeSpecName "kube-api-access-7jx5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.818203 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3586119e-2daa-4e61-8c43-8e3a9c455ab5" (UID: "3586119e-2daa-4e61-8c43-8e3a9c455ab5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.827278 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-inventory" (OuterVolumeSpecName: "inventory") pod "3586119e-2daa-4e61-8c43-8e3a9c455ab5" (UID: "3586119e-2daa-4e61-8c43-8e3a9c455ab5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.896252 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.896359 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3586119e-2daa-4e61-8c43-8e3a9c455ab5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:58 crc kubenswrapper[4750]: I0214 14:24:58.896382 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jx5n\" (UniqueName: \"kubernetes.io/projected/3586119e-2daa-4e61-8c43-8e3a9c455ab5-kube-api-access-7jx5n\") on node \"crc\" DevicePath \"\"" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.123459 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" 
event={"ID":"3586119e-2daa-4e61-8c43-8e3a9c455ab5","Type":"ContainerDied","Data":"b029a8c2506c7c9c76aaf8a45445bc96558e220f2a7bf0410ad0c33a062bd5e3"} Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.123535 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b029a8c2506c7c9c76aaf8a45445bc96558e220f2a7bf0410ad0c33a062bd5e3" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.123628 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.250679 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv"] Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251454 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3586119e-2daa-4e61-8c43-8e3a9c455ab5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251488 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="3586119e-2daa-4e61-8c43-8e3a9c455ab5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251507 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="registry-server" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251520 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="registry-server" Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251566 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="extract-content" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251579 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="extract-content" Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251610 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="extract-utilities" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251622 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="extract-utilities" Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251662 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="extract-utilities" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251674 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="extract-utilities" Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251710 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="registry-server" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251722 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="registry-server" Feb 14 14:24:59 crc kubenswrapper[4750]: E0214 14:24:59.251752 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="extract-content" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.251764 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="extract-content" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.252202 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="3586119e-2daa-4e61-8c43-8e3a9c455ab5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.252239 4750 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c666a897-0228-4df7-a74b-ec322ce78c34" containerName="registry-server" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.252274 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0018be-7214-4c0d-b203-0fd1ca8653e1" containerName="registry-server" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.253690 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.259699 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.260197 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.260478 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.260762 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.265506 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv"] Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.306820 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcv6\" (UniqueName: \"kubernetes.io/projected/bce32f17-aa17-4e19-bc8f-05b9f58cf140-kube-api-access-6jcv6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 
14:24:59.307484 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.307538 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.408928 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcv6\" (UniqueName: \"kubernetes.io/projected/bce32f17-aa17-4e19-bc8f-05b9f58cf140-kube-api-access-6jcv6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.409767 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.409966 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.417320 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.417534 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.436721 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcv6\" (UniqueName: \"kubernetes.io/projected/bce32f17-aa17-4e19-bc8f-05b9f58cf140-kube-api-access-6jcv6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r86mv\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:24:59 crc kubenswrapper[4750]: I0214 14:24:59.598486 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:25:00 crc kubenswrapper[4750]: I0214 14:25:00.316347 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv"] Feb 14 14:25:00 crc kubenswrapper[4750]: W0214 14:25:00.322237 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbce32f17_aa17_4e19_bc8f_05b9f58cf140.slice/crio-27ce3fbaac18c743d7b6ca055fffa5960bf3de6d140973f5886913e472ef4dad WatchSource:0}: Error finding container 27ce3fbaac18c743d7b6ca055fffa5960bf3de6d140973f5886913e472ef4dad: Status 404 returned error can't find the container with id 27ce3fbaac18c743d7b6ca055fffa5960bf3de6d140973f5886913e472ef4dad Feb 14 14:25:01 crc kubenswrapper[4750]: I0214 14:25:01.151804 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" event={"ID":"bce32f17-aa17-4e19-bc8f-05b9f58cf140","Type":"ContainerStarted","Data":"4c0437ca38eeada52c1f9acf89e22c678ede9182730bbe7ce645e86f53a08ea3"} Feb 14 14:25:01 crc kubenswrapper[4750]: I0214 14:25:01.152429 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" event={"ID":"bce32f17-aa17-4e19-bc8f-05b9f58cf140","Type":"ContainerStarted","Data":"27ce3fbaac18c743d7b6ca055fffa5960bf3de6d140973f5886913e472ef4dad"} Feb 14 14:25:01 crc kubenswrapper[4750]: I0214 14:25:01.182813 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" podStartSLOduration=1.783395058 podStartE2EDuration="2.182788481s" podCreationTimestamp="2026-02-14 14:24:59 +0000 UTC" firstStartedPulling="2026-02-14 14:25:00.331094318 +0000 UTC m=+1972.357083799" lastFinishedPulling="2026-02-14 14:25:00.730487701 +0000 
UTC m=+1972.756477222" observedRunningTime="2026-02-14 14:25:01.180005362 +0000 UTC m=+1973.205994853" watchObservedRunningTime="2026-02-14 14:25:01.182788481 +0000 UTC m=+1973.208777982" Feb 14 14:25:08 crc kubenswrapper[4750]: I0214 14:25:08.044051 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rrlxq"] Feb 14 14:25:08 crc kubenswrapper[4750]: I0214 14:25:08.058850 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rrlxq"] Feb 14 14:25:08 crc kubenswrapper[4750]: I0214 14:25:08.771795 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbc77b3-5108-40f8-b057-18e305e8f8ba" path="/var/lib/kubelet/pods/3bbc77b3-5108-40f8-b057-18e305e8f8ba/volumes" Feb 14 14:25:09 crc kubenswrapper[4750]: I0214 14:25:09.044482 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x9pkr"] Feb 14 14:25:09 crc kubenswrapper[4750]: I0214 14:25:09.085055 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x9pkr"] Feb 14 14:25:10 crc kubenswrapper[4750]: I0214 14:25:10.768848 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9237972d-c2ac-4ee2-a178-6a750be9c50a" path="/var/lib/kubelet/pods/9237972d-c2ac-4ee2-a178-6a750be9c50a/volumes" Feb 14 14:25:17 crc kubenswrapper[4750]: I0214 14:25:17.055611 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6b4rh"] Feb 14 14:25:17 crc kubenswrapper[4750]: I0214 14:25:17.068864 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6b4rh"] Feb 14 14:25:18 crc kubenswrapper[4750]: I0214 14:25:18.769079 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d67ec74-2a65-494e-a768-5cc6e6714e49" path="/var/lib/kubelet/pods/1d67ec74-2a65-494e-a768-5cc6e6714e49/volumes" Feb 14 14:25:20 crc kubenswrapper[4750]: I0214 14:25:20.034104 4750 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-db-sync-n6jc5"] Feb 14 14:25:20 crc kubenswrapper[4750]: I0214 14:25:20.047150 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n6jc5"] Feb 14 14:25:20 crc kubenswrapper[4750]: I0214 14:25:20.756066 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c96aa35-ddbc-4485-ac13-a2f08de1dd28" path="/var/lib/kubelet/pods/6c96aa35-ddbc-4485-ac13-a2f08de1dd28/volumes" Feb 14 14:25:24 crc kubenswrapper[4750]: I0214 14:25:24.991374 4750 scope.go:117] "RemoveContainer" containerID="8bc668725fd238ae39dd420ee5cc6f70476f2ea74225ef1662373364142c1a09" Feb 14 14:25:25 crc kubenswrapper[4750]: I0214 14:25:25.023404 4750 scope.go:117] "RemoveContainer" containerID="a57c412bd66e48323dca61d1d0d8e9450afe348ca7ea6b4b6efe0b1f515b48e0" Feb 14 14:25:25 crc kubenswrapper[4750]: I0214 14:25:25.108628 4750 scope.go:117] "RemoveContainer" containerID="bfd06cca322d29109d5521ce37946eca2879e4407a54df6bb2f3dc3e49fc1cc7" Feb 14 14:25:25 crc kubenswrapper[4750]: I0214 14:25:25.192821 4750 scope.go:117] "RemoveContainer" containerID="33bd9c23feb84763ec68aff067af462814774d271bb4c3e2ac17fd3ba5167564" Feb 14 14:25:25 crc kubenswrapper[4750]: I0214 14:25:25.254816 4750 scope.go:117] "RemoveContainer" containerID="f6014677be4fabec48663f7d6b80b860121f24b4645be3f8d4726c154fa54545" Feb 14 14:26:00 crc kubenswrapper[4750]: I0214 14:26:00.128858 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:26:00 crc kubenswrapper[4750]: I0214 14:26:00.129460 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:26:03 crc kubenswrapper[4750]: I0214 14:26:03.063304 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8f8b-account-create-update-8qtk7"] Feb 14 14:26:03 crc kubenswrapper[4750]: I0214 14:26:03.082437 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8f8b-account-create-update-8qtk7"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.033106 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lm7k9"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.044377 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9dff-account-create-update-sjzvt"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.055313 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-j27zc"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.065384 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9dff-account-create-update-sjzvt"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.075235 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lm7k9"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.084647 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-j27zc"] Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.762077 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075e5fd7-94c0-48b5-9c6d-a7994c4479f7" path="/var/lib/kubelet/pods/075e5fd7-94c0-48b5-9c6d-a7994c4479f7/volumes" Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.762770 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d155439-48d6-4b0b-b096-70e2527f2d7f" path="/var/lib/kubelet/pods/1d155439-48d6-4b0b-b096-70e2527f2d7f/volumes" Feb 14 14:26:04 crc 
kubenswrapper[4750]: I0214 14:26:04.763858 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c02c14-a300-466f-b53c-8f7a96be136a" path="/var/lib/kubelet/pods/55c02c14-a300-466f-b53c-8f7a96be136a/volumes" Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.764621 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e32c67d8-34c3-4340-b584-704ba0540dc7" path="/var/lib/kubelet/pods/e32c67d8-34c3-4340-b584-704ba0540dc7/volumes" Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.993770 4750 generic.go:334] "Generic (PLEG): container finished" podID="bce32f17-aa17-4e19-bc8f-05b9f58cf140" containerID="4c0437ca38eeada52c1f9acf89e22c678ede9182730bbe7ce645e86f53a08ea3" exitCode=0 Feb 14 14:26:04 crc kubenswrapper[4750]: I0214 14:26:04.993817 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" event={"ID":"bce32f17-aa17-4e19-bc8f-05b9f58cf140","Type":"ContainerDied","Data":"4c0437ca38eeada52c1f9acf89e22c678ede9182730bbe7ce645e86f53a08ea3"} Feb 14 14:26:05 crc kubenswrapper[4750]: I0214 14:26:05.061211 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zxk9n"] Feb 14 14:26:05 crc kubenswrapper[4750]: I0214 14:26:05.074968 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e02b-account-create-update-gvd4d"] Feb 14 14:26:05 crc kubenswrapper[4750]: I0214 14:26:05.085217 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e02b-account-create-update-gvd4d"] Feb 14 14:26:05 crc kubenswrapper[4750]: I0214 14:26:05.094088 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zxk9n"] Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.592202 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.748792 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jcv6\" (UniqueName: \"kubernetes.io/projected/bce32f17-aa17-4e19-bc8f-05b9f58cf140-kube-api-access-6jcv6\") pod \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.748840 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-ssh-key-openstack-edpm-ipam\") pod \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.748920 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-inventory\") pod \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\" (UID: \"bce32f17-aa17-4e19-bc8f-05b9f58cf140\") " Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.754395 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce32f17-aa17-4e19-bc8f-05b9f58cf140-kube-api-access-6jcv6" (OuterVolumeSpecName: "kube-api-access-6jcv6") pod "bce32f17-aa17-4e19-bc8f-05b9f58cf140" (UID: "bce32f17-aa17-4e19-bc8f-05b9f58cf140"). InnerVolumeSpecName "kube-api-access-6jcv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.757936 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c038eeb5-80d1-4e94-a5de-31b056dac6f7" path="/var/lib/kubelet/pods/c038eeb5-80d1-4e94-a5de-31b056dac6f7/volumes" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.758831 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7aabd1-3708-46b0-a20d-25fb29aa26ef" path="/var/lib/kubelet/pods/da7aabd1-3708-46b0-a20d-25fb29aa26ef/volumes" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.787933 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bce32f17-aa17-4e19-bc8f-05b9f58cf140" (UID: "bce32f17-aa17-4e19-bc8f-05b9f58cf140"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.796747 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-inventory" (OuterVolumeSpecName: "inventory") pod "bce32f17-aa17-4e19-bc8f-05b9f58cf140" (UID: "bce32f17-aa17-4e19-bc8f-05b9f58cf140"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.852317 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jcv6\" (UniqueName: \"kubernetes.io/projected/bce32f17-aa17-4e19-bc8f-05b9f58cf140-kube-api-access-6jcv6\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.852354 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:06 crc kubenswrapper[4750]: I0214 14:26:06.852368 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bce32f17-aa17-4e19-bc8f-05b9f58cf140-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.020292 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" event={"ID":"bce32f17-aa17-4e19-bc8f-05b9f58cf140","Type":"ContainerDied","Data":"27ce3fbaac18c743d7b6ca055fffa5960bf3de6d140973f5886913e472ef4dad"} Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.020344 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ce3fbaac18c743d7b6ca055fffa5960bf3de6d140973f5886913e472ef4dad" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.020406 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r86mv" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.127195 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t"] Feb 14 14:26:07 crc kubenswrapper[4750]: E0214 14:26:07.127799 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce32f17-aa17-4e19-bc8f-05b9f58cf140" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.127817 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce32f17-aa17-4e19-bc8f-05b9f58cf140" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.128259 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce32f17-aa17-4e19-bc8f-05b9f58cf140" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.129224 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.140311 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t"] Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.167302 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.167555 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.167782 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.167917 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.276978 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnzf\" (UniqueName: \"kubernetes.io/projected/8e6d20f9-b240-426a-8769-6f07bf3f75d4-kube-api-access-fbnzf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.277367 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 
14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.277417 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.380715 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.380847 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.381232 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbnzf\" (UniqueName: \"kubernetes.io/projected/8e6d20f9-b240-426a-8769-6f07bf3f75d4-kube-api-access-fbnzf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.384303 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.386490 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.406911 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbnzf\" (UniqueName: \"kubernetes.io/projected/8e6d20f9-b240-426a-8769-6f07bf3f75d4-kube-api-access-fbnzf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:07 crc kubenswrapper[4750]: I0214 14:26:07.498009 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:08 crc kubenswrapper[4750]: I0214 14:26:08.118834 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t"] Feb 14 14:26:08 crc kubenswrapper[4750]: W0214 14:26:08.125935 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6d20f9_b240_426a_8769_6f07bf3f75d4.slice/crio-3d414759fe12a33c1acab1c4cb584604659bb137b0020cc5a23d2fbe3ac40cc9 WatchSource:0}: Error finding container 3d414759fe12a33c1acab1c4cb584604659bb137b0020cc5a23d2fbe3ac40cc9: Status 404 returned error can't find the container with id 3d414759fe12a33c1acab1c4cb584604659bb137b0020cc5a23d2fbe3ac40cc9 Feb 14 14:26:09 crc kubenswrapper[4750]: I0214 14:26:09.041373 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" event={"ID":"8e6d20f9-b240-426a-8769-6f07bf3f75d4","Type":"ContainerStarted","Data":"c64c2f75ee6a1763898ded5b29cdf725bca4593fdc9e05ccd2d67da31d212933"} Feb 14 14:26:09 crc kubenswrapper[4750]: I0214 14:26:09.042029 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" event={"ID":"8e6d20f9-b240-426a-8769-6f07bf3f75d4","Type":"ContainerStarted","Data":"3d414759fe12a33c1acab1c4cb584604659bb137b0020cc5a23d2fbe3ac40cc9"} Feb 14 14:26:09 crc kubenswrapper[4750]: I0214 14:26:09.067896 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" podStartSLOduration=1.62286383 podStartE2EDuration="2.067874282s" podCreationTimestamp="2026-02-14 14:26:07 +0000 UTC" firstStartedPulling="2026-02-14 14:26:08.12843622 +0000 UTC m=+2040.154425701" lastFinishedPulling="2026-02-14 14:26:08.573446662 +0000 UTC 
m=+2040.599436153" observedRunningTime="2026-02-14 14:26:09.05763164 +0000 UTC m=+2041.083621151" watchObservedRunningTime="2026-02-14 14:26:09.067874282 +0000 UTC m=+2041.093863763" Feb 14 14:26:13 crc kubenswrapper[4750]: E0214 14:26:13.865260 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6d20f9_b240_426a_8769_6f07bf3f75d4.slice/crio-conmon-c64c2f75ee6a1763898ded5b29cdf725bca4593fdc9e05ccd2d67da31d212933.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6d20f9_b240_426a_8769_6f07bf3f75d4.slice/crio-c64c2f75ee6a1763898ded5b29cdf725bca4593fdc9e05ccd2d67da31d212933.scope\": RecentStats: unable to find data in memory cache]" Feb 14 14:26:14 crc kubenswrapper[4750]: I0214 14:26:14.106478 4750 generic.go:334] "Generic (PLEG): container finished" podID="8e6d20f9-b240-426a-8769-6f07bf3f75d4" containerID="c64c2f75ee6a1763898ded5b29cdf725bca4593fdc9e05ccd2d67da31d212933" exitCode=0 Feb 14 14:26:14 crc kubenswrapper[4750]: I0214 14:26:14.106526 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" event={"ID":"8e6d20f9-b240-426a-8769-6f07bf3f75d4","Type":"ContainerDied","Data":"c64c2f75ee6a1763898ded5b29cdf725bca4593fdc9e05ccd2d67da31d212933"} Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.665340 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.798525 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbnzf\" (UniqueName: \"kubernetes.io/projected/8e6d20f9-b240-426a-8769-6f07bf3f75d4-kube-api-access-fbnzf\") pod \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.798801 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-inventory\") pod \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.798821 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-ssh-key-openstack-edpm-ipam\") pod \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\" (UID: \"8e6d20f9-b240-426a-8769-6f07bf3f75d4\") " Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.804419 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6d20f9-b240-426a-8769-6f07bf3f75d4-kube-api-access-fbnzf" (OuterVolumeSpecName: "kube-api-access-fbnzf") pod "8e6d20f9-b240-426a-8769-6f07bf3f75d4" (UID: "8e6d20f9-b240-426a-8769-6f07bf3f75d4"). InnerVolumeSpecName "kube-api-access-fbnzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.838289 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e6d20f9-b240-426a-8769-6f07bf3f75d4" (UID: "8e6d20f9-b240-426a-8769-6f07bf3f75d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.853328 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-inventory" (OuterVolumeSpecName: "inventory") pod "8e6d20f9-b240-426a-8769-6f07bf3f75d4" (UID: "8e6d20f9-b240-426a-8769-6f07bf3f75d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.903329 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.903367 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e6d20f9-b240-426a-8769-6f07bf3f75d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:15 crc kubenswrapper[4750]: I0214 14:26:15.903383 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbnzf\" (UniqueName: \"kubernetes.io/projected/8e6d20f9-b240-426a-8769-6f07bf3f75d4-kube-api-access-fbnzf\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.130586 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" 
event={"ID":"8e6d20f9-b240-426a-8769-6f07bf3f75d4","Type":"ContainerDied","Data":"3d414759fe12a33c1acab1c4cb584604659bb137b0020cc5a23d2fbe3ac40cc9"} Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.130661 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d414759fe12a33c1acab1c4cb584604659bb137b0020cc5a23d2fbe3ac40cc9" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.130619 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.310503 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7"] Feb 14 14:26:16 crc kubenswrapper[4750]: E0214 14:26:16.311174 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6d20f9-b240-426a-8769-6f07bf3f75d4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.311193 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6d20f9-b240-426a-8769-6f07bf3f75d4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.311418 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6d20f9-b240-426a-8769-6f07bf3f75d4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.312251 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.315488 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.315674 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.315886 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.316298 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.324755 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7"] Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.415145 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.415234 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.415537 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p94wn\" (UniqueName: \"kubernetes.io/projected/8af7ba28-4efa-4a07-9199-c3c64c043543-kube-api-access-p94wn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.517945 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p94wn\" (UniqueName: \"kubernetes.io/projected/8af7ba28-4efa-4a07-9199-c3c64c043543-kube-api-access-p94wn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.518551 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.519386 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.523015 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.523393 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.539776 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p94wn\" (UniqueName: \"kubernetes.io/projected/8af7ba28-4efa-4a07-9199-c3c64c043543-kube-api-access-p94wn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqhr7\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:16 crc kubenswrapper[4750]: I0214 14:26:16.668158 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:17 crc kubenswrapper[4750]: I0214 14:26:17.245768 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7"] Feb 14 14:26:17 crc kubenswrapper[4750]: W0214 14:26:17.255331 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8af7ba28_4efa_4a07_9199_c3c64c043543.slice/crio-9104e21f497c39935069925039462db104d58276164a7265c51d83c35e17b1b6 WatchSource:0}: Error finding container 9104e21f497c39935069925039462db104d58276164a7265c51d83c35e17b1b6: Status 404 returned error can't find the container with id 9104e21f497c39935069925039462db104d58276164a7265c51d83c35e17b1b6 Feb 14 14:26:18 crc kubenswrapper[4750]: I0214 14:26:18.158526 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" event={"ID":"8af7ba28-4efa-4a07-9199-c3c64c043543","Type":"ContainerStarted","Data":"36790322840bf79928378852149da8b0b56e451e48aa0add79b0718eb1e4f314"} Feb 14 14:26:18 crc kubenswrapper[4750]: I0214 14:26:18.159217 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" event={"ID":"8af7ba28-4efa-4a07-9199-c3c64c043543","Type":"ContainerStarted","Data":"9104e21f497c39935069925039462db104d58276164a7265c51d83c35e17b1b6"} Feb 14 14:26:18 crc kubenswrapper[4750]: I0214 14:26:18.178403 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" podStartSLOduration=1.653994652 podStartE2EDuration="2.178384164s" podCreationTimestamp="2026-02-14 14:26:16 +0000 UTC" firstStartedPulling="2026-02-14 14:26:17.260739583 +0000 UTC m=+2049.286729074" lastFinishedPulling="2026-02-14 14:26:17.785129095 +0000 UTC m=+2049.811118586" 
observedRunningTime="2026-02-14 14:26:18.17543357 +0000 UTC m=+2050.201423061" watchObservedRunningTime="2026-02-14 14:26:18.178384164 +0000 UTC m=+2050.204373655" Feb 14 14:26:25 crc kubenswrapper[4750]: I0214 14:26:25.584809 4750 scope.go:117] "RemoveContainer" containerID="8424770a65d2fd4a8710e640c1faf114950897a277c4dbf948cf1435c33ff81f" Feb 14 14:26:25 crc kubenswrapper[4750]: I0214 14:26:25.620586 4750 scope.go:117] "RemoveContainer" containerID="0796b0757cbee597c4d0dddf5b2fc2c1e90e21d75e7c9427b8d2ceb6561ab6bb" Feb 14 14:26:25 crc kubenswrapper[4750]: I0214 14:26:25.740591 4750 scope.go:117] "RemoveContainer" containerID="5d9685a5f2ddfcacdc38563153f196d1779f7d1ea51fa422ea1b5715e3277647" Feb 14 14:26:25 crc kubenswrapper[4750]: I0214 14:26:25.775225 4750 scope.go:117] "RemoveContainer" containerID="f1e9bdbacff15ea4d132835b29bff39c95dabcea5765aaf6edde4ebc1334bb8e" Feb 14 14:26:25 crc kubenswrapper[4750]: I0214 14:26:25.823666 4750 scope.go:117] "RemoveContainer" containerID="a67f4c0bf20f873aeaaaeacb4e67be31502ae99c7333e53a6c3c4845b5c1df9c" Feb 14 14:26:25 crc kubenswrapper[4750]: I0214 14:26:25.875019 4750 scope.go:117] "RemoveContainer" containerID="ca67e9c8c7e50b7ccf7685189776accd42b11d6da89818512275123bcbb52ded" Feb 14 14:26:30 crc kubenswrapper[4750]: I0214 14:26:30.128664 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:26:30 crc kubenswrapper[4750]: I0214 14:26:30.129313 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 
14:26:42 crc kubenswrapper[4750]: I0214 14:26:42.061710 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kfvr"] Feb 14 14:26:42 crc kubenswrapper[4750]: I0214 14:26:42.071037 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kfvr"] Feb 14 14:26:42 crc kubenswrapper[4750]: I0214 14:26:42.756926 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf" path="/var/lib/kubelet/pods/d9c6b11e-cb82-4d27-acbd-3ca8ea7b60bf/volumes" Feb 14 14:26:53 crc kubenswrapper[4750]: I0214 14:26:53.578857 4750 generic.go:334] "Generic (PLEG): container finished" podID="8af7ba28-4efa-4a07-9199-c3c64c043543" containerID="36790322840bf79928378852149da8b0b56e451e48aa0add79b0718eb1e4f314" exitCode=0 Feb 14 14:26:53 crc kubenswrapper[4750]: I0214 14:26:53.578946 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" event={"ID":"8af7ba28-4efa-4a07-9199-c3c64c043543","Type":"ContainerDied","Data":"36790322840bf79928378852149da8b0b56e451e48aa0add79b0718eb1e4f314"} Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.043798 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-ggdkj"] Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.056468 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-ggdkj"] Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.067334 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-5c3d-account-create-update-gc5rf"] Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.078028 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-5c3d-account-create-update-gc5rf"] Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.104946 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.182210 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p94wn\" (UniqueName: \"kubernetes.io/projected/8af7ba28-4efa-4a07-9199-c3c64c043543-kube-api-access-p94wn\") pod \"8af7ba28-4efa-4a07-9199-c3c64c043543\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.182289 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-ssh-key-openstack-edpm-ipam\") pod \"8af7ba28-4efa-4a07-9199-c3c64c043543\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.182494 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-inventory\") pod \"8af7ba28-4efa-4a07-9199-c3c64c043543\" (UID: \"8af7ba28-4efa-4a07-9199-c3c64c043543\") " Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.187505 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af7ba28-4efa-4a07-9199-c3c64c043543-kube-api-access-p94wn" (OuterVolumeSpecName: "kube-api-access-p94wn") pod "8af7ba28-4efa-4a07-9199-c3c64c043543" (UID: "8af7ba28-4efa-4a07-9199-c3c64c043543"). InnerVolumeSpecName "kube-api-access-p94wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.221360 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-inventory" (OuterVolumeSpecName: "inventory") pod "8af7ba28-4efa-4a07-9199-c3c64c043543" (UID: "8af7ba28-4efa-4a07-9199-c3c64c043543"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.244100 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8af7ba28-4efa-4a07-9199-c3c64c043543" (UID: "8af7ba28-4efa-4a07-9199-c3c64c043543"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.285107 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.285368 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af7ba28-4efa-4a07-9199-c3c64c043543-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.285383 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p94wn\" (UniqueName: \"kubernetes.io/projected/8af7ba28-4efa-4a07-9199-c3c64c043543-kube-api-access-p94wn\") on node \"crc\" DevicePath \"\"" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.601439 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" event={"ID":"8af7ba28-4efa-4a07-9199-c3c64c043543","Type":"ContainerDied","Data":"9104e21f497c39935069925039462db104d58276164a7265c51d83c35e17b1b6"} Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 14:26:55.601501 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9104e21f497c39935069925039462db104d58276164a7265c51d83c35e17b1b6" Feb 14 14:26:55 crc kubenswrapper[4750]: I0214 
14:26:55.601519 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqhr7" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.224083 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg"] Feb 14 14:26:56 crc kubenswrapper[4750]: E0214 14:26:56.224972 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af7ba28-4efa-4a07-9199-c3c64c043543" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.225000 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af7ba28-4efa-4a07-9199-c3c64c043543" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.225513 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af7ba28-4efa-4a07-9199-c3c64c043543" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.226958 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.229595 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.230197 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.230657 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.237945 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg"] Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.238656 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.309718 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.309840 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.309955 
4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkbp\" (UniqueName: \"kubernetes.io/projected/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-kube-api-access-xkkbp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.412274 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkbp\" (UniqueName: \"kubernetes.io/projected/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-kube-api-access-xkkbp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.412729 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.412835 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.418830 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.421761 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.429852 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkbp\" (UniqueName: \"kubernetes.io/projected/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-kube-api-access-xkkbp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-69zmg\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.566611 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.770055 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2955f853-8f28-4231-8d84-ed81bb9c787e" path="/var/lib/kubelet/pods/2955f853-8f28-4231-8d84-ed81bb9c787e/volumes" Feb 14 14:26:56 crc kubenswrapper[4750]: I0214 14:26:56.770830 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6dc6646-1544-4d25-bcc8-3b269f31b74b" path="/var/lib/kubelet/pods/f6dc6646-1544-4d25-bcc8-3b269f31b74b/volumes" Feb 14 14:26:57 crc kubenswrapper[4750]: I0214 14:26:57.307549 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg"] Feb 14 14:26:57 crc kubenswrapper[4750]: I0214 14:26:57.630128 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" event={"ID":"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34","Type":"ContainerStarted","Data":"377701d7011efeca107052e2bc6329e9a7c27448fac36528a201c6250845e516"} Feb 14 14:26:58 crc kubenswrapper[4750]: I0214 14:26:58.650063 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" event={"ID":"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34","Type":"ContainerStarted","Data":"ca1774efa2d60d0490008ded0705c440f92724263dabeeea5f5290cceb44c80e"} Feb 14 14:26:58 crc kubenswrapper[4750]: I0214 14:26:58.700683 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" podStartSLOduration=2.306218746 podStartE2EDuration="2.700651298s" podCreationTimestamp="2026-02-14 14:26:56 +0000 UTC" firstStartedPulling="2026-02-14 14:26:57.288957929 +0000 UTC m=+2089.314947410" lastFinishedPulling="2026-02-14 14:26:57.683390481 +0000 UTC m=+2089.709379962" observedRunningTime="2026-02-14 
14:26:58.679412283 +0000 UTC m=+2090.705401804" watchObservedRunningTime="2026-02-14 14:26:58.700651298 +0000 UTC m=+2090.726640819" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.357440 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8j8wv"] Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.367813 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.369725 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8j8wv"] Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.496325 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-catalog-content\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.496683 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-utilities\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.496992 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8d5\" (UniqueName: \"kubernetes.io/projected/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-kube-api-access-vj8d5\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.599747 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-utilities\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.599914 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj8d5\" (UniqueName: \"kubernetes.io/projected/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-kube-api-access-vj8d5\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.599989 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-catalog-content\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.600286 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-utilities\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.600596 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-catalog-content\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.619868 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj8d5\" 
(UniqueName: \"kubernetes.io/projected/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-kube-api-access-vj8d5\") pod \"redhat-operators-8j8wv\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:26:59 crc kubenswrapper[4750]: I0214 14:26:59.716180 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.129127 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.129399 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.129443 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.130395 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"affcfbd57c375f679036ebe11e90cd4a9cc2328cc994179256eb16ba5524b032"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.130465 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://affcfbd57c375f679036ebe11e90cd4a9cc2328cc994179256eb16ba5524b032" gracePeriod=600 Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.199183 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8j8wv"] Feb 14 14:27:00 crc kubenswrapper[4750]: W0214 14:27:00.204981 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae9ffb9_b591_43bc_aa57_9e280535dd7a.slice/crio-125e985c77cd1a5bdd8091099d7743bcead19961f1f286653dc3834483728924 WatchSource:0}: Error finding container 125e985c77cd1a5bdd8091099d7743bcead19961f1f286653dc3834483728924: Status 404 returned error can't find the container with id 125e985c77cd1a5bdd8091099d7743bcead19961f1f286653dc3834483728924 Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.669946 4750 generic.go:334] "Generic (PLEG): container finished" podID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerID="fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e" exitCode=0 Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.670046 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerDied","Data":"fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e"} Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.670093 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerStarted","Data":"125e985c77cd1a5bdd8091099d7743bcead19961f1f286653dc3834483728924"} Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.675557 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="affcfbd57c375f679036ebe11e90cd4a9cc2328cc994179256eb16ba5524b032" exitCode=0 Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.675594 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"affcfbd57c375f679036ebe11e90cd4a9cc2328cc994179256eb16ba5524b032"} Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.675621 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be"} Feb 14 14:27:00 crc kubenswrapper[4750]: I0214 14:27:00.675637 4750 scope.go:117] "RemoveContainer" containerID="fd012ff208a9d05d9ea3efd810ae3f314103696961dfa4ceeffb75f9cab9a2f4" Feb 14 14:27:02 crc kubenswrapper[4750]: I0214 14:27:02.698714 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerStarted","Data":"341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc"} Feb 14 14:27:06 crc kubenswrapper[4750]: I0214 14:27:06.748851 4750 generic.go:334] "Generic (PLEG): container finished" podID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerID="341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc" exitCode=0 Feb 14 14:27:06 crc kubenswrapper[4750]: I0214 14:27:06.756538 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerDied","Data":"341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc"} Feb 14 14:27:07 crc kubenswrapper[4750]: I0214 14:27:07.765497 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerStarted","Data":"b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b"} Feb 14 14:27:07 crc kubenswrapper[4750]: I0214 14:27:07.795727 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8j8wv" podStartSLOduration=2.309402075 podStartE2EDuration="8.79571221s" podCreationTimestamp="2026-02-14 14:26:59 +0000 UTC" firstStartedPulling="2026-02-14 14:27:00.672477308 +0000 UTC m=+2092.698466799" lastFinishedPulling="2026-02-14 14:27:07.158787443 +0000 UTC m=+2099.184776934" observedRunningTime="2026-02-14 14:27:07.786369414 +0000 UTC m=+2099.812358895" watchObservedRunningTime="2026-02-14 14:27:07.79571221 +0000 UTC m=+2099.821701691" Feb 14 14:27:08 crc kubenswrapper[4750]: I0214 14:27:08.037584 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-72fmt"] Feb 14 14:27:08 crc kubenswrapper[4750]: I0214 14:27:08.050826 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-72fmt"] Feb 14 14:27:08 crc kubenswrapper[4750]: I0214 14:27:08.063281 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6dqm"] Feb 14 14:27:08 crc kubenswrapper[4750]: I0214 14:27:08.073535 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6dqm"] Feb 14 14:27:08 crc kubenswrapper[4750]: I0214 14:27:08.780903 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f" path="/var/lib/kubelet/pods/7caaaa23-5571-4f5d-8e3b-2722a9f4eb8f/volumes" Feb 14 14:27:08 crc kubenswrapper[4750]: I0214 14:27:08.803294 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866ef186-d207-4fb3-8940-8c27b138480b" 
path="/var/lib/kubelet/pods/866ef186-d207-4fb3-8940-8c27b138480b/volumes" Feb 14 14:27:09 crc kubenswrapper[4750]: I0214 14:27:09.716618 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:09 crc kubenswrapper[4750]: I0214 14:27:09.716914 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:10 crc kubenswrapper[4750]: I0214 14:27:10.763857 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8j8wv" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="registry-server" probeResult="failure" output=< Feb 14 14:27:10 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:27:10 crc kubenswrapper[4750]: > Feb 14 14:27:20 crc kubenswrapper[4750]: I0214 14:27:20.773685 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8j8wv" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="registry-server" probeResult="failure" output=< Feb 14 14:27:20 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:27:20 crc kubenswrapper[4750]: > Feb 14 14:27:26 crc kubenswrapper[4750]: I0214 14:27:26.063727 4750 scope.go:117] "RemoveContainer" containerID="555f94b71066c3a86bce6b9fe88fdfd9783dee13be10d4a8bd5808b751256080" Feb 14 14:27:26 crc kubenswrapper[4750]: I0214 14:27:26.101293 4750 scope.go:117] "RemoveContainer" containerID="52ca3a14b634963ed0acfe0eb967f9160c104a19a3db8df86d62fcd1f81eafeb" Feb 14 14:27:26 crc kubenswrapper[4750]: I0214 14:27:26.204389 4750 scope.go:117] "RemoveContainer" containerID="516678059df0b161c311edded47b210243fc0cd95e08650ebdaaaa9803446bec" Feb 14 14:27:26 crc kubenswrapper[4750]: I0214 14:27:26.230763 4750 scope.go:117] "RemoveContainer" containerID="b04b520d013b92f2ac2310030b1be32ac7e64d535f2c74f061749b2bdf0384ca" Feb 
14 14:27:26 crc kubenswrapper[4750]: I0214 14:27:26.286614 4750 scope.go:117] "RemoveContainer" containerID="5e7d67ed00dd94f46370db5e9826c4e22d836135b7ea786bd63687adea02195b" Feb 14 14:27:30 crc kubenswrapper[4750]: I0214 14:27:30.794725 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8j8wv" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="registry-server" probeResult="failure" output=< Feb 14 14:27:30 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:27:30 crc kubenswrapper[4750]: > Feb 14 14:27:39 crc kubenswrapper[4750]: I0214 14:27:39.768051 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:39 crc kubenswrapper[4750]: I0214 14:27:39.818738 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:40 crc kubenswrapper[4750]: I0214 14:27:40.007663 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8j8wv"] Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.143752 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8j8wv" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="registry-server" containerID="cri-o://b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b" gracePeriod=2 Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.664392 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.839569 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-catalog-content\") pod \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.839815 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-utilities\") pod \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.839855 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj8d5\" (UniqueName: \"kubernetes.io/projected/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-kube-api-access-vj8d5\") pod \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\" (UID: \"4ae9ffb9-b591-43bc-aa57-9e280535dd7a\") " Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.841437 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-utilities" (OuterVolumeSpecName: "utilities") pod "4ae9ffb9-b591-43bc-aa57-9e280535dd7a" (UID: "4ae9ffb9-b591-43bc-aa57-9e280535dd7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.846246 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-kube-api-access-vj8d5" (OuterVolumeSpecName: "kube-api-access-vj8d5") pod "4ae9ffb9-b591-43bc-aa57-9e280535dd7a" (UID: "4ae9ffb9-b591-43bc-aa57-9e280535dd7a"). InnerVolumeSpecName "kube-api-access-vj8d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.942459 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.942492 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj8d5\" (UniqueName: \"kubernetes.io/projected/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-kube-api-access-vj8d5\") on node \"crc\" DevicePath \"\"" Feb 14 14:27:41 crc kubenswrapper[4750]: I0214 14:27:41.967358 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ae9ffb9-b591-43bc-aa57-9e280535dd7a" (UID: "4ae9ffb9-b591-43bc-aa57-9e280535dd7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.044406 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae9ffb9-b591-43bc-aa57-9e280535dd7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.159063 4750 generic.go:334] "Generic (PLEG): container finished" podID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerID="b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b" exitCode=0 Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.159139 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerDied","Data":"b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b"} Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.159207 4750 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-8j8wv" event={"ID":"4ae9ffb9-b591-43bc-aa57-9e280535dd7a","Type":"ContainerDied","Data":"125e985c77cd1a5bdd8091099d7743bcead19961f1f286653dc3834483728924"} Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.159232 4750 scope.go:117] "RemoveContainer" containerID="b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.159170 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j8wv" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.205058 4750 scope.go:117] "RemoveContainer" containerID="341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.226177 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8j8wv"] Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.250911 4750 scope.go:117] "RemoveContainer" containerID="fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.254312 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8j8wv"] Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.357892 4750 scope.go:117] "RemoveContainer" containerID="b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b" Feb 14 14:27:42 crc kubenswrapper[4750]: E0214 14:27:42.359197 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b\": container with ID starting with b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b not found: ID does not exist" containerID="b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.359228 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b"} err="failed to get container status \"b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b\": rpc error: code = NotFound desc = could not find container \"b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b\": container with ID starting with b974919d35ff53e4709bbb55985bc8ef17b66033f9f4a7a24fa3e2d935b2ef8b not found: ID does not exist" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.359250 4750 scope.go:117] "RemoveContainer" containerID="341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc" Feb 14 14:27:42 crc kubenswrapper[4750]: E0214 14:27:42.360246 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc\": container with ID starting with 341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc not found: ID does not exist" containerID="341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.360268 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc"} err="failed to get container status \"341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc\": rpc error: code = NotFound desc = could not find container \"341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc\": container with ID starting with 341501da3e4b2c5f977baa5ccece6387db969ec51be0b745cebcf733ceb647dc not found: ID does not exist" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.360282 4750 scope.go:117] "RemoveContainer" containerID="fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e" Feb 14 14:27:42 crc kubenswrapper[4750]: E0214 
14:27:42.360836 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e\": container with ID starting with fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e not found: ID does not exist" containerID="fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.360872 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e"} err="failed to get container status \"fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e\": rpc error: code = NotFound desc = could not find container \"fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e\": container with ID starting with fa54288539320cc2aeceac68977d931b0b30593223cf51d02411b712b9de390e not found: ID does not exist" Feb 14 14:27:42 crc kubenswrapper[4750]: I0214 14:27:42.754130 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" path="/var/lib/kubelet/pods/4ae9ffb9-b591-43bc-aa57-9e280535dd7a/volumes" Feb 14 14:27:47 crc kubenswrapper[4750]: I0214 14:27:47.214909 4750 generic.go:334] "Generic (PLEG): container finished" podID="afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" containerID="ca1774efa2d60d0490008ded0705c440f92724263dabeeea5f5290cceb44c80e" exitCode=0 Feb 14 14:27:47 crc kubenswrapper[4750]: I0214 14:27:47.215000 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" event={"ID":"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34","Type":"ContainerDied","Data":"ca1774efa2d60d0490008ded0705c440f92724263dabeeea5f5290cceb44c80e"} Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.753428 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.872216 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkkbp\" (UniqueName: \"kubernetes.io/projected/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-kube-api-access-xkkbp\") pod \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.872306 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-ssh-key-openstack-edpm-ipam\") pod \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.872407 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-inventory\") pod \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\" (UID: \"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34\") " Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.880627 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-kube-api-access-xkkbp" (OuterVolumeSpecName: "kube-api-access-xkkbp") pod "afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" (UID: "afa7c9b6-0f84-49e3-9e1d-667b2ff99d34"). InnerVolumeSpecName "kube-api-access-xkkbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.909353 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" (UID: "afa7c9b6-0f84-49e3-9e1d-667b2ff99d34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.912026 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-inventory" (OuterVolumeSpecName: "inventory") pod "afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" (UID: "afa7c9b6-0f84-49e3-9e1d-667b2ff99d34"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.976050 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkkbp\" (UniqueName: \"kubernetes.io/projected/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-kube-api-access-xkkbp\") on node \"crc\" DevicePath \"\"" Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.976093 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:27:48 crc kubenswrapper[4750]: I0214 14:27:48.976123 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa7c9b6-0f84-49e3-9e1d-667b2ff99d34-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.237767 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" 
event={"ID":"afa7c9b6-0f84-49e3-9e1d-667b2ff99d34","Type":"ContainerDied","Data":"377701d7011efeca107052e2bc6329e9a7c27448fac36528a201c6250845e516"} Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.238007 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377701d7011efeca107052e2bc6329e9a7c27448fac36528a201c6250845e516" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.237880 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-69zmg" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.370161 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8m7xx"] Feb 14 14:27:49 crc kubenswrapper[4750]: E0214 14:27:49.371148 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.371167 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:27:49 crc kubenswrapper[4750]: E0214 14:27:49.371190 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="extract-utilities" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.371197 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="extract-utilities" Feb 14 14:27:49 crc kubenswrapper[4750]: E0214 14:27:49.371226 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="registry-server" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.371233 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" 
containerName="registry-server" Feb 14 14:27:49 crc kubenswrapper[4750]: E0214 14:27:49.371247 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="extract-content" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.371253 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="extract-content" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.372086 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae9ffb9-b591-43bc-aa57-9e280535dd7a" containerName="registry-server" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.372512 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa7c9b6-0f84-49e3-9e1d-667b2ff99d34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.374777 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.377316 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.378538 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.378795 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.383790 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.415660 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8m7xx"] Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 
14:27:49.501645 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdcz\" (UniqueName: \"kubernetes.io/projected/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-kube-api-access-lzdcz\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.502185 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.502305 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.605148 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.605316 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: 
\"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.605461 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdcz\" (UniqueName: \"kubernetes.io/projected/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-kube-api-access-lzdcz\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.610554 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.610561 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.633003 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdcz\" (UniqueName: \"kubernetes.io/projected/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-kube-api-access-lzdcz\") pod \"ssh-known-hosts-edpm-deployment-8m7xx\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:49 crc kubenswrapper[4750]: I0214 14:27:49.700160 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:27:50 crc kubenswrapper[4750]: I0214 14:27:50.314742 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8m7xx"] Feb 14 14:27:51 crc kubenswrapper[4750]: I0214 14:27:51.044848 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4c8dd"] Feb 14 14:27:51 crc kubenswrapper[4750]: I0214 14:27:51.055190 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4c8dd"] Feb 14 14:27:51 crc kubenswrapper[4750]: I0214 14:27:51.260007 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" event={"ID":"483eea7a-81f0-4f0c-92d8-dc0d3f713f10","Type":"ContainerStarted","Data":"9651ff9083f24d9d96c76d17129b39be0132ba08b47b103f8c47aab451308565"} Feb 14 14:27:51 crc kubenswrapper[4750]: I0214 14:27:51.260061 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" event={"ID":"483eea7a-81f0-4f0c-92d8-dc0d3f713f10","Type":"ContainerStarted","Data":"111c8ea99821541285686594a97ca61f6940e6a9993f27cf91170e5bf3609180"} Feb 14 14:27:51 crc kubenswrapper[4750]: I0214 14:27:51.290166 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" podStartSLOduration=1.9118382280000001 podStartE2EDuration="2.290142631s" podCreationTimestamp="2026-02-14 14:27:49 +0000 UTC" firstStartedPulling="2026-02-14 14:27:50.317457792 +0000 UTC m=+2142.343447273" lastFinishedPulling="2026-02-14 14:27:50.695762195 +0000 UTC m=+2142.721751676" observedRunningTime="2026-02-14 14:27:51.283417529 +0000 UTC m=+2143.309407010" watchObservedRunningTime="2026-02-14 14:27:51.290142631 +0000 UTC m=+2143.316132122" Feb 14 14:27:52 crc kubenswrapper[4750]: I0214 14:27:52.760824 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="08b0dd23-b026-4e72-b924-e6a73fea0c09" path="/var/lib/kubelet/pods/08b0dd23-b026-4e72-b924-e6a73fea0c09/volumes" Feb 14 14:27:58 crc kubenswrapper[4750]: I0214 14:27:58.361595 4750 generic.go:334] "Generic (PLEG): container finished" podID="483eea7a-81f0-4f0c-92d8-dc0d3f713f10" containerID="9651ff9083f24d9d96c76d17129b39be0132ba08b47b103f8c47aab451308565" exitCode=0 Feb 14 14:27:58 crc kubenswrapper[4750]: I0214 14:27:58.361707 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" event={"ID":"483eea7a-81f0-4f0c-92d8-dc0d3f713f10","Type":"ContainerDied","Data":"9651ff9083f24d9d96c76d17129b39be0132ba08b47b103f8c47aab451308565"} Feb 14 14:27:59 crc kubenswrapper[4750]: I0214 14:27:59.949805 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.109491 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-ssh-key-openstack-edpm-ipam\") pod \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.109690 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-inventory-0\") pod \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.109757 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdcz\" (UniqueName: \"kubernetes.io/projected/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-kube-api-access-lzdcz\") pod \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\" (UID: \"483eea7a-81f0-4f0c-92d8-dc0d3f713f10\") " Feb 14 
14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.115366 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-kube-api-access-lzdcz" (OuterVolumeSpecName: "kube-api-access-lzdcz") pod "483eea7a-81f0-4f0c-92d8-dc0d3f713f10" (UID: "483eea7a-81f0-4f0c-92d8-dc0d3f713f10"). InnerVolumeSpecName "kube-api-access-lzdcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.140382 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "483eea7a-81f0-4f0c-92d8-dc0d3f713f10" (UID: "483eea7a-81f0-4f0c-92d8-dc0d3f713f10"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.152537 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "483eea7a-81f0-4f0c-92d8-dc0d3f713f10" (UID: "483eea7a-81f0-4f0c-92d8-dc0d3f713f10"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.212647 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.212680 4750 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.212691 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzdcz\" (UniqueName: \"kubernetes.io/projected/483eea7a-81f0-4f0c-92d8-dc0d3f713f10-kube-api-access-lzdcz\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.395314 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" event={"ID":"483eea7a-81f0-4f0c-92d8-dc0d3f713f10","Type":"ContainerDied","Data":"111c8ea99821541285686594a97ca61f6940e6a9993f27cf91170e5bf3609180"} Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.395401 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="111c8ea99821541285686594a97ca61f6940e6a9993f27cf91170e5bf3609180" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.395492 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8m7xx" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.490590 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd"] Feb 14 14:28:00 crc kubenswrapper[4750]: E0214 14:28:00.491476 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483eea7a-81f0-4f0c-92d8-dc0d3f713f10" containerName="ssh-known-hosts-edpm-deployment" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.491510 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="483eea7a-81f0-4f0c-92d8-dc0d3f713f10" containerName="ssh-known-hosts-edpm-deployment" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.491944 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="483eea7a-81f0-4f0c-92d8-dc0d3f713f10" containerName="ssh-known-hosts-edpm-deployment" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.493448 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.500990 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.501403 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.501551 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.502163 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.525051 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd"] Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.626317 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txhwt\" (UniqueName: \"kubernetes.io/projected/6ca25e17-509f-40d0-94e1-83db6398669c-kube-api-access-txhwt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.626508 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.626602 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.729252 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txhwt\" (UniqueName: \"kubernetes.io/projected/6ca25e17-509f-40d0-94e1-83db6398669c-kube-api-access-txhwt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.729627 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.729704 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.734506 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.734961 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.745329 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txhwt\" (UniqueName: \"kubernetes.io/projected/6ca25e17-509f-40d0-94e1-83db6398669c-kube-api-access-txhwt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g4gfd\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:00 crc kubenswrapper[4750]: I0214 14:28:00.830017 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:01 crc kubenswrapper[4750]: I0214 14:28:01.208147 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd"] Feb 14 14:28:01 crc kubenswrapper[4750]: I0214 14:28:01.406321 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" event={"ID":"6ca25e17-509f-40d0-94e1-83db6398669c","Type":"ContainerStarted","Data":"a8fefc7dfa056dae71f8a5c5eccb183e1ff0088eff934ee0e5301db930fc762f"} Feb 14 14:28:02 crc kubenswrapper[4750]: I0214 14:28:02.432466 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" event={"ID":"6ca25e17-509f-40d0-94e1-83db6398669c","Type":"ContainerStarted","Data":"b792ca64624b87f9c6e7637aa80bebb78ea41771518a04bc932879158bc7ea35"} Feb 14 14:28:02 crc kubenswrapper[4750]: I0214 14:28:02.469442 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" podStartSLOduration=2.072841051 podStartE2EDuration="2.469412493s" podCreationTimestamp="2026-02-14 14:28:00 +0000 UTC" firstStartedPulling="2026-02-14 14:28:01.207278432 +0000 UTC m=+2153.233267913" lastFinishedPulling="2026-02-14 14:28:01.603849874 +0000 UTC m=+2153.629839355" observedRunningTime="2026-02-14 14:28:02.46264997 +0000 UTC m=+2154.488639451" watchObservedRunningTime="2026-02-14 14:28:02.469412493 +0000 UTC m=+2154.495401984" Feb 14 14:28:10 crc kubenswrapper[4750]: I0214 14:28:10.550497 4750 generic.go:334] "Generic (PLEG): container finished" podID="6ca25e17-509f-40d0-94e1-83db6398669c" containerID="b792ca64624b87f9c6e7637aa80bebb78ea41771518a04bc932879158bc7ea35" exitCode=0 Feb 14 14:28:10 crc kubenswrapper[4750]: I0214 14:28:10.550573 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" event={"ID":"6ca25e17-509f-40d0-94e1-83db6398669c","Type":"ContainerDied","Data":"b792ca64624b87f9c6e7637aa80bebb78ea41771518a04bc932879158bc7ea35"} Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.209304 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.350621 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-ssh-key-openstack-edpm-ipam\") pod \"6ca25e17-509f-40d0-94e1-83db6398669c\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.351004 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-inventory\") pod \"6ca25e17-509f-40d0-94e1-83db6398669c\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.351168 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txhwt\" (UniqueName: \"kubernetes.io/projected/6ca25e17-509f-40d0-94e1-83db6398669c-kube-api-access-txhwt\") pod \"6ca25e17-509f-40d0-94e1-83db6398669c\" (UID: \"6ca25e17-509f-40d0-94e1-83db6398669c\") " Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.358077 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca25e17-509f-40d0-94e1-83db6398669c-kube-api-access-txhwt" (OuterVolumeSpecName: "kube-api-access-txhwt") pod "6ca25e17-509f-40d0-94e1-83db6398669c" (UID: "6ca25e17-509f-40d0-94e1-83db6398669c"). InnerVolumeSpecName "kube-api-access-txhwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.407867 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ca25e17-509f-40d0-94e1-83db6398669c" (UID: "6ca25e17-509f-40d0-94e1-83db6398669c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.427833 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-inventory" (OuterVolumeSpecName: "inventory") pod "6ca25e17-509f-40d0-94e1-83db6398669c" (UID: "6ca25e17-509f-40d0-94e1-83db6398669c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.455607 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.455656 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca25e17-509f-40d0-94e1-83db6398669c-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.455671 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txhwt\" (UniqueName: \"kubernetes.io/projected/6ca25e17-509f-40d0-94e1-83db6398669c-kube-api-access-txhwt\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.585633 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" 
event={"ID":"6ca25e17-509f-40d0-94e1-83db6398669c","Type":"ContainerDied","Data":"a8fefc7dfa056dae71f8a5c5eccb183e1ff0088eff934ee0e5301db930fc762f"} Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.585687 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8fefc7dfa056dae71f8a5c5eccb183e1ff0088eff934ee0e5301db930fc762f" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.585758 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g4gfd" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.672875 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4"] Feb 14 14:28:12 crc kubenswrapper[4750]: E0214 14:28:12.673499 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca25e17-509f-40d0-94e1-83db6398669c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.673521 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca25e17-509f-40d0-94e1-83db6398669c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.673825 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca25e17-509f-40d0-94e1-83db6398669c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.674833 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.678901 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.679249 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.679306 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.681777 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.686822 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4"] Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.766391 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.766500 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.766633 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxq6q\" (UniqueName: \"kubernetes.io/projected/2636faba-c74f-47a7-8a7c-eb14094ab50b-kube-api-access-fxq6q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.867455 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.867850 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxq6q\" (UniqueName: \"kubernetes.io/projected/2636faba-c74f-47a7-8a7c-eb14094ab50b-kube-api-access-fxq6q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.867976 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.871483 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.872139 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.883137 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxq6q\" (UniqueName: \"kubernetes.io/projected/2636faba-c74f-47a7-8a7c-eb14094ab50b-kube-api-access-fxq6q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-726m4\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:12 crc kubenswrapper[4750]: I0214 14:28:12.990868 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:13 crc kubenswrapper[4750]: I0214 14:28:13.601406 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4"] Feb 14 14:28:14 crc kubenswrapper[4750]: I0214 14:28:14.619975 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" event={"ID":"2636faba-c74f-47a7-8a7c-eb14094ab50b","Type":"ContainerStarted","Data":"1d02e76e386c5c1dca81432d387914fe59f7e8b0af7f173b51cef880cb6e989f"} Feb 14 14:28:14 crc kubenswrapper[4750]: I0214 14:28:14.620732 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" event={"ID":"2636faba-c74f-47a7-8a7c-eb14094ab50b","Type":"ContainerStarted","Data":"6fcdf9c45b4e25ee60e09e6e283708e9f1d5faefb6c997b1930435c7dfcf0b62"} Feb 14 14:28:14 crc kubenswrapper[4750]: I0214 14:28:14.645213 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" podStartSLOduration=2.066175682 podStartE2EDuration="2.64518699s" podCreationTimestamp="2026-02-14 14:28:12 +0000 UTC" firstStartedPulling="2026-02-14 14:28:13.61169299 +0000 UTC m=+2165.637682491" lastFinishedPulling="2026-02-14 14:28:14.190704288 +0000 UTC m=+2166.216693799" observedRunningTime="2026-02-14 14:28:14.644704696 +0000 UTC m=+2166.670694177" watchObservedRunningTime="2026-02-14 14:28:14.64518699 +0000 UTC m=+2166.671176481" Feb 14 14:28:24 crc kubenswrapper[4750]: I0214 14:28:24.779182 4750 generic.go:334] "Generic (PLEG): container finished" podID="2636faba-c74f-47a7-8a7c-eb14094ab50b" containerID="1d02e76e386c5c1dca81432d387914fe59f7e8b0af7f173b51cef880cb6e989f" exitCode=0 Feb 14 14:28:24 crc kubenswrapper[4750]: I0214 14:28:24.779287 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" event={"ID":"2636faba-c74f-47a7-8a7c-eb14094ab50b","Type":"ContainerDied","Data":"1d02e76e386c5c1dca81432d387914fe59f7e8b0af7f173b51cef880cb6e989f"} Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.391664 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.444062 4750 scope.go:117] "RemoveContainer" containerID="840b14f06790bd205d04be74420fa875e0829b2b0edb5e725bf4b3f4cc8bb918" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.467881 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-ssh-key-openstack-edpm-ipam\") pod \"2636faba-c74f-47a7-8a7c-eb14094ab50b\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.468189 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-inventory\") pod \"2636faba-c74f-47a7-8a7c-eb14094ab50b\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.468795 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxq6q\" (UniqueName: \"kubernetes.io/projected/2636faba-c74f-47a7-8a7c-eb14094ab50b-kube-api-access-fxq6q\") pod \"2636faba-c74f-47a7-8a7c-eb14094ab50b\" (UID: \"2636faba-c74f-47a7-8a7c-eb14094ab50b\") " Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.474386 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2636faba-c74f-47a7-8a7c-eb14094ab50b-kube-api-access-fxq6q" (OuterVolumeSpecName: "kube-api-access-fxq6q") pod 
"2636faba-c74f-47a7-8a7c-eb14094ab50b" (UID: "2636faba-c74f-47a7-8a7c-eb14094ab50b"). InnerVolumeSpecName "kube-api-access-fxq6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.506385 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-inventory" (OuterVolumeSpecName: "inventory") pod "2636faba-c74f-47a7-8a7c-eb14094ab50b" (UID: "2636faba-c74f-47a7-8a7c-eb14094ab50b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.508195 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2636faba-c74f-47a7-8a7c-eb14094ab50b" (UID: "2636faba-c74f-47a7-8a7c-eb14094ab50b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.572415 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.572463 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2636faba-c74f-47a7-8a7c-eb14094ab50b-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.572478 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxq6q\" (UniqueName: \"kubernetes.io/projected/2636faba-c74f-47a7-8a7c-eb14094ab50b-kube-api-access-fxq6q\") on node \"crc\" DevicePath \"\"" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.803750 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" event={"ID":"2636faba-c74f-47a7-8a7c-eb14094ab50b","Type":"ContainerDied","Data":"6fcdf9c45b4e25ee60e09e6e283708e9f1d5faefb6c997b1930435c7dfcf0b62"} Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.803798 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fcdf9c45b4e25ee60e09e6e283708e9f1d5faefb6c997b1930435c7dfcf0b62" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.803840 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-726m4" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.888986 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6"] Feb 14 14:28:26 crc kubenswrapper[4750]: E0214 14:28:26.889656 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2636faba-c74f-47a7-8a7c-eb14094ab50b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.889683 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2636faba-c74f-47a7-8a7c-eb14094ab50b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.890023 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2636faba-c74f-47a7-8a7c-eb14094ab50b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.891014 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.892985 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.893276 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.894237 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.894788 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.894870 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.895086 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.895280 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.894803 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.895670 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.920778 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6"] Feb 14 14:28:26 crc 
kubenswrapper[4750]: I0214 14:28:26.988470 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988566 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988621 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988734 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 
14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988763 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988789 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988840 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988865 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.988983 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989013 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989078 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989124 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989164 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989218 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989292 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pptzg\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-kube-api-access-pptzg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:26 crc kubenswrapper[4750]: I0214 14:28:26.989322 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc 
kubenswrapper[4750]: I0214 14:28:27.091287 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.091683 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092197 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092406 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092467 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092516 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092575 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092630 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092710 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092754 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092796 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092864 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092913 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.092958 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.093039 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pptzg\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-kube-api-access-pptzg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.093105 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.096137 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.096497 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.098275 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.098476 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.098604 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.098619 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.099172 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.099750 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.100608 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.101021 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.102061 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.105041 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.108441 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 
14:28:27.108450 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.111218 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.115652 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pptzg\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-kube-api-access-pptzg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.208648 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:28:27 crc kubenswrapper[4750]: I0214 14:28:27.866525 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6"] Feb 14 14:28:28 crc kubenswrapper[4750]: I0214 14:28:28.838019 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" event={"ID":"a0fe9116-89eb-49c2-a659-2dfdfe1c885a","Type":"ContainerStarted","Data":"ba499ac1fcef4df1562be0e49d865158db28cbec048442374907cb8222d62dd0"} Feb 14 14:28:28 crc kubenswrapper[4750]: I0214 14:28:28.838414 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" event={"ID":"a0fe9116-89eb-49c2-a659-2dfdfe1c885a","Type":"ContainerStarted","Data":"4ef1eec8e253ba7af57965c7216792b69591ebf3a4a639de40d6b4ce5322f309"} Feb 14 14:28:28 crc kubenswrapper[4750]: I0214 14:28:28.867884 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" podStartSLOduration=2.386579661 podStartE2EDuration="2.867863316s" podCreationTimestamp="2026-02-14 14:28:26 +0000 UTC" firstStartedPulling="2026-02-14 14:28:27.87139831 +0000 UTC m=+2179.897387791" lastFinishedPulling="2026-02-14 14:28:28.352681965 +0000 UTC m=+2180.378671446" observedRunningTime="2026-02-14 14:28:28.858081478 +0000 UTC m=+2180.884070999" watchObservedRunningTime="2026-02-14 14:28:28.867863316 +0000 UTC m=+2180.893852807" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.345903 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-prcvg"] Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.350618 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.362169 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-catalog-content\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.362213 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p52\" (UniqueName: \"kubernetes.io/projected/33225998-ab00-4ad9-b6e3-cf409ac113e1-kube-api-access-72p52\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.362285 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-utilities\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.366555 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prcvg"] Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.464089 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p52\" (UniqueName: \"kubernetes.io/projected/33225998-ab00-4ad9-b6e3-cf409ac113e1-kube-api-access-72p52\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.464161 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-catalog-content\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.464275 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-utilities\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.464682 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-catalog-content\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.464889 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-utilities\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.487383 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p52\" (UniqueName: \"kubernetes.io/projected/33225998-ab00-4ad9-b6e3-cf409ac113e1-kube-api-access-72p52\") pod \"community-operators-prcvg\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:49 crc kubenswrapper[4750]: I0214 14:28:49.681680 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:50 crc kubenswrapper[4750]: W0214 14:28:50.225898 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33225998_ab00_4ad9_b6e3_cf409ac113e1.slice/crio-a90deca71e444b2bb7ca66300b24fa530bd4e7136e770965cad49ba5b7d2bf9a WatchSource:0}: Error finding container a90deca71e444b2bb7ca66300b24fa530bd4e7136e770965cad49ba5b7d2bf9a: Status 404 returned error can't find the container with id a90deca71e444b2bb7ca66300b24fa530bd4e7136e770965cad49ba5b7d2bf9a Feb 14 14:28:50 crc kubenswrapper[4750]: I0214 14:28:50.229308 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prcvg"] Feb 14 14:28:51 crc kubenswrapper[4750]: I0214 14:28:51.129002 4750 generic.go:334] "Generic (PLEG): container finished" podID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerID="6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2" exitCode=0 Feb 14 14:28:51 crc kubenswrapper[4750]: I0214 14:28:51.129127 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerDied","Data":"6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2"} Feb 14 14:28:51 crc kubenswrapper[4750]: I0214 14:28:51.129683 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerStarted","Data":"a90deca71e444b2bb7ca66300b24fa530bd4e7136e770965cad49ba5b7d2bf9a"} Feb 14 14:28:52 crc kubenswrapper[4750]: I0214 14:28:52.145728 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" 
event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerStarted","Data":"7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922"} Feb 14 14:28:54 crc kubenswrapper[4750]: I0214 14:28:54.179941 4750 generic.go:334] "Generic (PLEG): container finished" podID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerID="7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922" exitCode=0 Feb 14 14:28:54 crc kubenswrapper[4750]: I0214 14:28:54.180069 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerDied","Data":"7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922"} Feb 14 14:28:55 crc kubenswrapper[4750]: I0214 14:28:55.196339 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerStarted","Data":"76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed"} Feb 14 14:28:55 crc kubenswrapper[4750]: I0214 14:28:55.237174 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-prcvg" podStartSLOduration=2.756036678 podStartE2EDuration="6.237142512s" podCreationTimestamp="2026-02-14 14:28:49 +0000 UTC" firstStartedPulling="2026-02-14 14:28:51.130873539 +0000 UTC m=+2203.156863040" lastFinishedPulling="2026-02-14 14:28:54.611979383 +0000 UTC m=+2206.637968874" observedRunningTime="2026-02-14 14:28:55.221086064 +0000 UTC m=+2207.247075625" watchObservedRunningTime="2026-02-14 14:28:55.237142512 +0000 UTC m=+2207.263132023" Feb 14 14:28:59 crc kubenswrapper[4750]: I0214 14:28:59.683298 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:28:59 crc kubenswrapper[4750]: I0214 14:28:59.684911 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:29:00 crc kubenswrapper[4750]: I0214 14:29:00.128663 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:29:00 crc kubenswrapper[4750]: I0214 14:29:00.128729 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:29:00 crc kubenswrapper[4750]: I0214 14:29:00.731178 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-prcvg" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="registry-server" probeResult="failure" output=< Feb 14 14:29:00 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:29:00 crc kubenswrapper[4750]: > Feb 14 14:29:09 crc kubenswrapper[4750]: I0214 14:29:09.755724 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:29:09 crc kubenswrapper[4750]: I0214 14:29:09.813640 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:29:10 crc kubenswrapper[4750]: I0214 14:29:10.007916 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prcvg"] Feb 14 14:29:11 crc kubenswrapper[4750]: I0214 14:29:11.386216 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-prcvg" 
podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="registry-server" containerID="cri-o://76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed" gracePeriod=2 Feb 14 14:29:11 crc kubenswrapper[4750]: I0214 14:29:11.911032 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.072284 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72p52\" (UniqueName: \"kubernetes.io/projected/33225998-ab00-4ad9-b6e3-cf409ac113e1-kube-api-access-72p52\") pod \"33225998-ab00-4ad9-b6e3-cf409ac113e1\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.072381 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-catalog-content\") pod \"33225998-ab00-4ad9-b6e3-cf409ac113e1\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.072575 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-utilities\") pod \"33225998-ab00-4ad9-b6e3-cf409ac113e1\" (UID: \"33225998-ab00-4ad9-b6e3-cf409ac113e1\") " Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.073351 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-utilities" (OuterVolumeSpecName: "utilities") pod "33225998-ab00-4ad9-b6e3-cf409ac113e1" (UID: "33225998-ab00-4ad9-b6e3-cf409ac113e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.081322 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33225998-ab00-4ad9-b6e3-cf409ac113e1-kube-api-access-72p52" (OuterVolumeSpecName: "kube-api-access-72p52") pod "33225998-ab00-4ad9-b6e3-cf409ac113e1" (UID: "33225998-ab00-4ad9-b6e3-cf409ac113e1"). InnerVolumeSpecName "kube-api-access-72p52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.150573 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33225998-ab00-4ad9-b6e3-cf409ac113e1" (UID: "33225998-ab00-4ad9-b6e3-cf409ac113e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.176379 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.176438 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72p52\" (UniqueName: \"kubernetes.io/projected/33225998-ab00-4ad9-b6e3-cf409ac113e1-kube-api-access-72p52\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.176459 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33225998-ab00-4ad9-b6e3-cf409ac113e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.402687 4750 generic.go:334] "Generic (PLEG): container finished" podID="33225998-ab00-4ad9-b6e3-cf409ac113e1" 
containerID="76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed" exitCode=0 Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.402749 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerDied","Data":"76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed"} Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.402807 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prcvg" event={"ID":"33225998-ab00-4ad9-b6e3-cf409ac113e1","Type":"ContainerDied","Data":"a90deca71e444b2bb7ca66300b24fa530bd4e7136e770965cad49ba5b7d2bf9a"} Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.402835 4750 scope.go:117] "RemoveContainer" containerID="76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.405222 4750 generic.go:334] "Generic (PLEG): container finished" podID="a0fe9116-89eb-49c2-a659-2dfdfe1c885a" containerID="ba499ac1fcef4df1562be0e49d865158db28cbec048442374907cb8222d62dd0" exitCode=0 Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.405257 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" event={"ID":"a0fe9116-89eb-49c2-a659-2dfdfe1c885a","Type":"ContainerDied","Data":"ba499ac1fcef4df1562be0e49d865158db28cbec048442374907cb8222d62dd0"} Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.406050 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prcvg" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.446440 4750 scope.go:117] "RemoveContainer" containerID="7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.472937 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prcvg"] Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.477679 4750 scope.go:117] "RemoveContainer" containerID="6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.484561 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-prcvg"] Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.546472 4750 scope.go:117] "RemoveContainer" containerID="76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed" Feb 14 14:29:12 crc kubenswrapper[4750]: E0214 14:29:12.547538 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed\": container with ID starting with 76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed not found: ID does not exist" containerID="76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.547575 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed"} err="failed to get container status \"76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed\": rpc error: code = NotFound desc = could not find container \"76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed\": container with ID starting with 76184dfcbfc8ffbe888af94115c0e67501dd1b59c3e58f03ee6e5b92ad9501ed not 
found: ID does not exist" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.547599 4750 scope.go:117] "RemoveContainer" containerID="7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922" Feb 14 14:29:12 crc kubenswrapper[4750]: E0214 14:29:12.547883 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922\": container with ID starting with 7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922 not found: ID does not exist" containerID="7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.547983 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922"} err="failed to get container status \"7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922\": rpc error: code = NotFound desc = could not find container \"7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922\": container with ID starting with 7bede29d52506de07e3580bf8a291c00c5828e46a65673a9f136d7d7a8b3c922 not found: ID does not exist" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.548017 4750 scope.go:117] "RemoveContainer" containerID="6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2" Feb 14 14:29:12 crc kubenswrapper[4750]: E0214 14:29:12.548394 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2\": container with ID starting with 6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2 not found: ID does not exist" containerID="6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.548442 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2"} err="failed to get container status \"6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2\": rpc error: code = NotFound desc = could not find container \"6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2\": container with ID starting with 6cd7dfeee3e85d6bdb81977e6623a5738a33f3a83f99dd0178f3213c060c7fb2 not found: ID does not exist" Feb 14 14:29:12 crc kubenswrapper[4750]: I0214 14:29:12.767023 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" path="/var/lib/kubelet/pods/33225998-ab00-4ad9-b6e3-cf409ac113e1/volumes" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.442338 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" event={"ID":"a0fe9116-89eb-49c2-a659-2dfdfe1c885a","Type":"ContainerDied","Data":"4ef1eec8e253ba7af57965c7216792b69591ebf3a4a639de40d6b4ce5322f309"} Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.442623 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef1eec8e253ba7af57965c7216792b69591ebf3a4a639de40d6b4ce5322f309" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.515242 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544614 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544715 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-nova-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544746 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544794 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ssh-key-openstack-edpm-ipam\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544878 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-neutron-metadata-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544954 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.544981 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pptzg\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-kube-api-access-pptzg\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545005 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-power-monitoring-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545097 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-inventory\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545162 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ovn-combined-ca-bundle\") pod 
\"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545289 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-repo-setup-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545335 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-bootstrap-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545388 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545426 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-libvirt-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545455 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-combined-ca-bundle\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: 
\"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.545492 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\" (UID: \"a0fe9116-89eb-49c2-a659-2dfdfe1c885a\") " Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.557658 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.562067 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-kube-api-access-pptzg" (OuterVolumeSpecName: "kube-api-access-pptzg") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "kube-api-access-pptzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.563147 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.563984 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.564027 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.569287 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.573570 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.574975 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.575002 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.575214 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.575251 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.575277 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.575931 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.578616 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.610194 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-inventory" (OuterVolumeSpecName: "inventory") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.616044 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0fe9116-89eb-49c2-a659-2dfdfe1c885a" (UID: "a0fe9116-89eb-49c2-a659-2dfdfe1c885a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649048 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649099 4750 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649137 4750 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649151 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649168 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649184 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" 
Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649197 4750 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649209 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649222 4750 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649235 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649250 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pptzg\" (UniqueName: \"kubernetes.io/projected/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-kube-api-access-pptzg\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649263 4750 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649279 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649293 4750 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649308 4750 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:14 crc kubenswrapper[4750]: I0214 14:29:14.649319 4750 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fe9116-89eb-49c2-a659-2dfdfe1c885a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.455578 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.770624 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g"] Feb 14 14:29:15 crc kubenswrapper[4750]: E0214 14:29:15.773792 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="extract-utilities" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.773825 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="extract-utilities" Feb 14 14:29:15 crc kubenswrapper[4750]: E0214 14:29:15.773876 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="registry-server" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.773887 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="registry-server" Feb 14 14:29:15 crc kubenswrapper[4750]: E0214 14:29:15.773958 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="extract-content" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.773967 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="extract-content" Feb 14 14:29:15 crc kubenswrapper[4750]: E0214 14:29:15.774032 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fe9116-89eb-49c2-a659-2dfdfe1c885a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.774043 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fe9116-89eb-49c2-a659-2dfdfe1c885a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.774889 
4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fe9116-89eb-49c2-a659-2dfdfe1c885a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.774959 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="33225998-ab00-4ad9-b6e3-cf409ac113e1" containerName="registry-server" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.776837 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.784541 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.785076 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.785379 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.785625 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.785813 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.799698 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g"] Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.886927 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: 
\"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.887143 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nt2\" (UniqueName: \"kubernetes.io/projected/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-kube-api-access-28nt2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.887910 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.888010 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.888089 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 
14:29:15.990638 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.990735 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.990789 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.992688 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.992945 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-inventory\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.993688 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28nt2\" (UniqueName: \"kubernetes.io/projected/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-kube-api-access-28nt2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.996163 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:15 crc kubenswrapper[4750]: I0214 14:29:15.996586 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:16 crc kubenswrapper[4750]: I0214 14:29:16.007438 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:16 crc kubenswrapper[4750]: I0214 14:29:16.014721 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-28nt2\" (UniqueName: \"kubernetes.io/projected/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-kube-api-access-28nt2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cdx5g\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:16 crc kubenswrapper[4750]: I0214 14:29:16.115421 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:29:16 crc kubenswrapper[4750]: W0214 14:29:16.744517 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d98d1d6_d8c3_4b5c_b848_afceef7706f4.slice/crio-b2a4f2b0b519d5c7b9ab62b8eb6c9bc85176d969409fed075350b4fa662e373e WatchSource:0}: Error finding container b2a4f2b0b519d5c7b9ab62b8eb6c9bc85176d969409fed075350b4fa662e373e: Status 404 returned error can't find the container with id b2a4f2b0b519d5c7b9ab62b8eb6c9bc85176d969409fed075350b4fa662e373e Feb 14 14:29:16 crc kubenswrapper[4750]: I0214 14:29:16.762100 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g"] Feb 14 14:29:17 crc kubenswrapper[4750]: I0214 14:29:17.473990 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" event={"ID":"2d98d1d6-d8c3-4b5c-b848-afceef7706f4","Type":"ContainerStarted","Data":"b2a4f2b0b519d5c7b9ab62b8eb6c9bc85176d969409fed075350b4fa662e373e"} Feb 14 14:29:18 crc kubenswrapper[4750]: I0214 14:29:18.484403 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" event={"ID":"2d98d1d6-d8c3-4b5c-b848-afceef7706f4","Type":"ContainerStarted","Data":"f6e5c1e97ba14b484febce770de04d9584b52ab80b2a72bfeb60c2e8f4702985"} Feb 14 14:29:18 crc kubenswrapper[4750]: I0214 14:29:18.508101 4750 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" podStartSLOduration=3.092444306 podStartE2EDuration="3.508082547s" podCreationTimestamp="2026-02-14 14:29:15 +0000 UTC" firstStartedPulling="2026-02-14 14:29:16.746678611 +0000 UTC m=+2228.772668102" lastFinishedPulling="2026-02-14 14:29:17.162316852 +0000 UTC m=+2229.188306343" observedRunningTime="2026-02-14 14:29:18.500583783 +0000 UTC m=+2230.526573264" watchObservedRunningTime="2026-02-14 14:29:18.508082547 +0000 UTC m=+2230.534072028" Feb 14 14:29:23 crc kubenswrapper[4750]: I0214 14:29:23.065669 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zrqpc"] Feb 14 14:29:23 crc kubenswrapper[4750]: I0214 14:29:23.080803 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zrqpc"] Feb 14 14:29:24 crc kubenswrapper[4750]: I0214 14:29:24.765857 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15aeb180-f3b4-46a0-847c-5befbf340b51" path="/var/lib/kubelet/pods/15aeb180-f3b4-46a0-847c-5befbf340b51/volumes" Feb 14 14:29:26 crc kubenswrapper[4750]: I0214 14:29:26.544539 4750 scope.go:117] "RemoveContainer" containerID="a7f9965ab1ae6fcb5e4a95f0e24fe14c198796121536c79296e4669d819ad326" Feb 14 14:29:30 crc kubenswrapper[4750]: I0214 14:29:30.129443 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:29:30 crc kubenswrapper[4750]: I0214 14:29:30.130029 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.128683 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.129323 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.129377 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.130254 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.130343 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" gracePeriod=600 Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.167924 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b"] Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.170389 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.174909 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.179787 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.180242 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b"] Feb 14 14:30:00 crc kubenswrapper[4750]: E0214 14:30:00.250720 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.325007 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213a52b1-d668-4878-a752-4254ae6e6534-config-volume\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.325071 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/213a52b1-d668-4878-a752-4254ae6e6534-secret-volume\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.325265 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tsmz\" (UniqueName: \"kubernetes.io/projected/213a52b1-d668-4878-a752-4254ae6e6534-kube-api-access-7tsmz\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.427449 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213a52b1-d668-4878-a752-4254ae6e6534-config-volume\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.427533 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/213a52b1-d668-4878-a752-4254ae6e6534-secret-volume\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.427611 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tsmz\" (UniqueName: \"kubernetes.io/projected/213a52b1-d668-4878-a752-4254ae6e6534-kube-api-access-7tsmz\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.428419 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213a52b1-d668-4878-a752-4254ae6e6534-config-volume\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.434766 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/213a52b1-d668-4878-a752-4254ae6e6534-secret-volume\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.446253 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tsmz\" (UniqueName: \"kubernetes.io/projected/213a52b1-d668-4878-a752-4254ae6e6534-kube-api-access-7tsmz\") pod \"collect-profiles-29517990-n9c7b\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.501620 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:00 crc kubenswrapper[4750]: I0214 14:30:00.997808 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b"] Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.207606 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" exitCode=0 Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.207699 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be"} Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.208055 4750 scope.go:117] "RemoveContainer" containerID="affcfbd57c375f679036ebe11e90cd4a9cc2328cc994179256eb16ba5524b032" Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.208884 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:30:01 crc kubenswrapper[4750]: E0214 14:30:01.209242 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.211410 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" 
event={"ID":"213a52b1-d668-4878-a752-4254ae6e6534","Type":"ContainerStarted","Data":"61d6940c897cc02f885c3b9a548d36f2d4bffffee47ff13655a57ef59871511f"} Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.211448 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" event={"ID":"213a52b1-d668-4878-a752-4254ae6e6534","Type":"ContainerStarted","Data":"3d37b2a46d14134303d407d25e9004001161c2eca899dd8c01faf694e23e0073"} Feb 14 14:30:01 crc kubenswrapper[4750]: I0214 14:30:01.263338 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" podStartSLOduration=1.26331981 podStartE2EDuration="1.26331981s" podCreationTimestamp="2026-02-14 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 14:30:01.252965856 +0000 UTC m=+2273.278955367" watchObservedRunningTime="2026-02-14 14:30:01.26331981 +0000 UTC m=+2273.289309291" Feb 14 14:30:02 crc kubenswrapper[4750]: I0214 14:30:02.230209 4750 generic.go:334] "Generic (PLEG): container finished" podID="213a52b1-d668-4878-a752-4254ae6e6534" containerID="61d6940c897cc02f885c3b9a548d36f2d4bffffee47ff13655a57ef59871511f" exitCode=0 Feb 14 14:30:02 crc kubenswrapper[4750]: I0214 14:30:02.230299 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" event={"ID":"213a52b1-d668-4878-a752-4254ae6e6534","Type":"ContainerDied","Data":"61d6940c897cc02f885c3b9a548d36f2d4bffffee47ff13655a57ef59871511f"} Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.051132 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-qpbmh"] Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.062855 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-qpbmh"] 
Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.640183 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.842207 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/213a52b1-d668-4878-a752-4254ae6e6534-secret-volume\") pod \"213a52b1-d668-4878-a752-4254ae6e6534\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.842362 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213a52b1-d668-4878-a752-4254ae6e6534-config-volume\") pod \"213a52b1-d668-4878-a752-4254ae6e6534\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.842443 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tsmz\" (UniqueName: \"kubernetes.io/projected/213a52b1-d668-4878-a752-4254ae6e6534-kube-api-access-7tsmz\") pod \"213a52b1-d668-4878-a752-4254ae6e6534\" (UID: \"213a52b1-d668-4878-a752-4254ae6e6534\") " Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.843321 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213a52b1-d668-4878-a752-4254ae6e6534-config-volume" (OuterVolumeSpecName: "config-volume") pod "213a52b1-d668-4878-a752-4254ae6e6534" (UID: "213a52b1-d668-4878-a752-4254ae6e6534"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.847441 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213a52b1-d668-4878-a752-4254ae6e6534-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "213a52b1-d668-4878-a752-4254ae6e6534" (UID: "213a52b1-d668-4878-a752-4254ae6e6534"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.848973 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213a52b1-d668-4878-a752-4254ae6e6534-kube-api-access-7tsmz" (OuterVolumeSpecName: "kube-api-access-7tsmz") pod "213a52b1-d668-4878-a752-4254ae6e6534" (UID: "213a52b1-d668-4878-a752-4254ae6e6534"). InnerVolumeSpecName "kube-api-access-7tsmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.945496 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/213a52b1-d668-4878-a752-4254ae6e6534-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.945537 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213a52b1-d668-4878-a752-4254ae6e6534-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:03 crc kubenswrapper[4750]: I0214 14:30:03.945549 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tsmz\" (UniqueName: \"kubernetes.io/projected/213a52b1-d668-4878-a752-4254ae6e6534-kube-api-access-7tsmz\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.256887 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" 
event={"ID":"213a52b1-d668-4878-a752-4254ae6e6534","Type":"ContainerDied","Data":"3d37b2a46d14134303d407d25e9004001161c2eca899dd8c01faf694e23e0073"} Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.256949 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d37b2a46d14134303d407d25e9004001161c2eca899dd8c01faf694e23e0073" Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.257017 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b" Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.710919 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb"] Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.722372 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517945-fwbrb"] Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.760507 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bce0a3c-19ee-4338-bed6-63e2a01bf3de" path="/var/lib/kubelet/pods/5bce0a3c-19ee-4338-bed6-63e2a01bf3de/volumes" Feb 14 14:30:04 crc kubenswrapper[4750]: I0214 14:30:04.761975 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cf262a-619b-4edd-bffb-55e5d454b23b" path="/var/lib/kubelet/pods/78cf262a-619b-4edd-bffb-55e5d454b23b/volumes" Feb 14 14:30:15 crc kubenswrapper[4750]: I0214 14:30:15.742750 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:30:15 crc kubenswrapper[4750]: E0214 14:30:15.743999 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:30:19 crc kubenswrapper[4750]: I0214 14:30:19.461040 4750 generic.go:334] "Generic (PLEG): container finished" podID="2d98d1d6-d8c3-4b5c-b848-afceef7706f4" containerID="f6e5c1e97ba14b484febce770de04d9584b52ab80b2a72bfeb60c2e8f4702985" exitCode=0 Feb 14 14:30:19 crc kubenswrapper[4750]: I0214 14:30:19.461148 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" event={"ID":"2d98d1d6-d8c3-4b5c-b848-afceef7706f4","Type":"ContainerDied","Data":"f6e5c1e97ba14b484febce770de04d9584b52ab80b2a72bfeb60c2e8f4702985"} Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.019707 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.157581 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ssh-key-openstack-edpm-ipam\") pod \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.157830 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovncontroller-config-0\") pod \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.157942 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovn-combined-ca-bundle\") pod \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.158031 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-inventory\") pod \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.158143 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28nt2\" (UniqueName: \"kubernetes.io/projected/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-kube-api-access-28nt2\") pod \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\" (UID: \"2d98d1d6-d8c3-4b5c-b848-afceef7706f4\") " Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.175946 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2d98d1d6-d8c3-4b5c-b848-afceef7706f4" (UID: "2d98d1d6-d8c3-4b5c-b848-afceef7706f4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.183413 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-kube-api-access-28nt2" (OuterVolumeSpecName: "kube-api-access-28nt2") pod "2d98d1d6-d8c3-4b5c-b848-afceef7706f4" (UID: "2d98d1d6-d8c3-4b5c-b848-afceef7706f4"). InnerVolumeSpecName "kube-api-access-28nt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.191189 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "2d98d1d6-d8c3-4b5c-b848-afceef7706f4" (UID: "2d98d1d6-d8c3-4b5c-b848-afceef7706f4"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.194656 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d98d1d6-d8c3-4b5c-b848-afceef7706f4" (UID: "2d98d1d6-d8c3-4b5c-b848-afceef7706f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.195846 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-inventory" (OuterVolumeSpecName: "inventory") pod "2d98d1d6-d8c3-4b5c-b848-afceef7706f4" (UID: "2d98d1d6-d8c3-4b5c-b848-afceef7706f4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.264217 4750 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.264789 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.265072 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28nt2\" (UniqueName: \"kubernetes.io/projected/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-kube-api-access-28nt2\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.265097 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.265134 4750 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2d98d1d6-d8c3-4b5c-b848-afceef7706f4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.491557 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" event={"ID":"2d98d1d6-d8c3-4b5c-b848-afceef7706f4","Type":"ContainerDied","Data":"b2a4f2b0b519d5c7b9ab62b8eb6c9bc85176d969409fed075350b4fa662e373e"} Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.492032 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a4f2b0b519d5c7b9ab62b8eb6c9bc85176d969409fed075350b4fa662e373e" Feb 14 
14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.491617 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cdx5g" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.623724 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb"] Feb 14 14:30:21 crc kubenswrapper[4750]: E0214 14:30:21.624546 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213a52b1-d668-4878-a752-4254ae6e6534" containerName="collect-profiles" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.624578 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="213a52b1-d668-4878-a752-4254ae6e6534" containerName="collect-profiles" Feb 14 14:30:21 crc kubenswrapper[4750]: E0214 14:30:21.624596 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d98d1d6-d8c3-4b5c-b848-afceef7706f4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.624610 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d98d1d6-d8c3-4b5c-b848-afceef7706f4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.625042 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="213a52b1-d668-4878-a752-4254ae6e6534" containerName="collect-profiles" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.625080 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d98d1d6-d8c3-4b5c-b848-afceef7706f4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.626531 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.629634 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.631468 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.631760 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.633785 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.633985 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.634502 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.636094 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb"] Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.779488 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.779885 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.780036 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.780178 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.780374 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.780455 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclk8\" (UniqueName: \"kubernetes.io/projected/868da7c8-8b42-419d-9801-06c947d3333c-kube-api-access-fclk8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.883380 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.883515 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.883590 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.883714 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.883769 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclk8\" (UniqueName: \"kubernetes.io/projected/868da7c8-8b42-419d-9801-06c947d3333c-kube-api-access-fclk8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.883869 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.887285 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.887711 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.888547 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.897294 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.902424 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.904292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fclk8\" (UniqueName: \"kubernetes.io/projected/868da7c8-8b42-419d-9801-06c947d3333c-kube-api-access-fclk8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:21 crc kubenswrapper[4750]: I0214 14:30:21.951771 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:30:22 crc kubenswrapper[4750]: I0214 14:30:22.572949 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb"] Feb 14 14:30:22 crc kubenswrapper[4750]: I0214 14:30:22.584221 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:30:23 crc kubenswrapper[4750]: I0214 14:30:23.517098 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" event={"ID":"868da7c8-8b42-419d-9801-06c947d3333c","Type":"ContainerStarted","Data":"c5e896b14c756dc0b81708489b5ca22efbdc41f84f735382ff78d46dd157f9bd"} Feb 14 14:30:24 crc kubenswrapper[4750]: I0214 14:30:24.541907 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" event={"ID":"868da7c8-8b42-419d-9801-06c947d3333c","Type":"ContainerStarted","Data":"4ada7d03aa6870615b3baa3868539591332f73d29e20cb824a5bf7026f415c56"} Feb 14 14:30:24 crc kubenswrapper[4750]: I0214 14:30:24.589190 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" podStartSLOduration=2.651770217 podStartE2EDuration="3.58916124s" podCreationTimestamp="2026-02-14 14:30:21 +0000 UTC" firstStartedPulling="2026-02-14 14:30:22.583970819 +0000 UTC m=+2294.609960300" lastFinishedPulling="2026-02-14 14:30:23.521361842 +0000 UTC m=+2295.547351323" observedRunningTime="2026-02-14 14:30:24.576075848 +0000 UTC m=+2296.602065349" watchObservedRunningTime="2026-02-14 14:30:24.58916124 +0000 UTC m=+2296.615150751" Feb 14 
14:30:26 crc kubenswrapper[4750]: I0214 14:30:26.681197 4750 scope.go:117] "RemoveContainer" containerID="e6eb83d078e2f91bf8a61029bdbd9ef94c21719bdff8a2fb4e93ceebe278e0e2" Feb 14 14:30:26 crc kubenswrapper[4750]: I0214 14:30:26.738740 4750 scope.go:117] "RemoveContainer" containerID="7c4578f066aec2dd97ad7a4113f70b265c30d2f0b662e953b837dd48e5ee1535" Feb 14 14:30:30 crc kubenswrapper[4750]: I0214 14:30:30.742061 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:30:30 crc kubenswrapper[4750]: E0214 14:30:30.743026 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:30:42 crc kubenswrapper[4750]: I0214 14:30:42.741969 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:30:42 crc kubenswrapper[4750]: E0214 14:30:42.742823 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:30:56 crc kubenswrapper[4750]: I0214 14:30:56.742434 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:30:56 crc kubenswrapper[4750]: E0214 14:30:56.744263 4750 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:31:10 crc kubenswrapper[4750]: I0214 14:31:10.743800 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:31:10 crc kubenswrapper[4750]: E0214 14:31:10.745071 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:31:12 crc kubenswrapper[4750]: I0214 14:31:12.104593 4750 generic.go:334] "Generic (PLEG): container finished" podID="868da7c8-8b42-419d-9801-06c947d3333c" containerID="4ada7d03aa6870615b3baa3868539591332f73d29e20cb824a5bf7026f415c56" exitCode=0 Feb 14 14:31:12 crc kubenswrapper[4750]: I0214 14:31:12.104761 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" event={"ID":"868da7c8-8b42-419d-9801-06c947d3333c","Type":"ContainerDied","Data":"4ada7d03aa6870615b3baa3868539591332f73d29e20cb824a5bf7026f415c56"} Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.625743 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.730360 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fclk8\" (UniqueName: \"kubernetes.io/projected/868da7c8-8b42-419d-9801-06c947d3333c-kube-api-access-fclk8\") pod \"868da7c8-8b42-419d-9801-06c947d3333c\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.730622 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-metadata-combined-ca-bundle\") pod \"868da7c8-8b42-419d-9801-06c947d3333c\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.730662 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-ssh-key-openstack-edpm-ipam\") pod \"868da7c8-8b42-419d-9801-06c947d3333c\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.730828 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-inventory\") pod \"868da7c8-8b42-419d-9801-06c947d3333c\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.731008 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"868da7c8-8b42-419d-9801-06c947d3333c\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " Feb 14 
14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.731065 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-nova-metadata-neutron-config-0\") pod \"868da7c8-8b42-419d-9801-06c947d3333c\" (UID: \"868da7c8-8b42-419d-9801-06c947d3333c\") " Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.740418 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "868da7c8-8b42-419d-9801-06c947d3333c" (UID: "868da7c8-8b42-419d-9801-06c947d3333c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.740417 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868da7c8-8b42-419d-9801-06c947d3333c-kube-api-access-fclk8" (OuterVolumeSpecName: "kube-api-access-fclk8") pod "868da7c8-8b42-419d-9801-06c947d3333c" (UID: "868da7c8-8b42-419d-9801-06c947d3333c"). InnerVolumeSpecName "kube-api-access-fclk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.771337 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "868da7c8-8b42-419d-9801-06c947d3333c" (UID: "868da7c8-8b42-419d-9801-06c947d3333c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.772182 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "868da7c8-8b42-419d-9801-06c947d3333c" (UID: "868da7c8-8b42-419d-9801-06c947d3333c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.773379 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-inventory" (OuterVolumeSpecName: "inventory") pod "868da7c8-8b42-419d-9801-06c947d3333c" (UID: "868da7c8-8b42-419d-9801-06c947d3333c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.774319 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "868da7c8-8b42-419d-9801-06c947d3333c" (UID: "868da7c8-8b42-419d-9801-06c947d3333c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.834077 4750 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.834127 4750 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.834143 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fclk8\" (UniqueName: \"kubernetes.io/projected/868da7c8-8b42-419d-9801-06c947d3333c-kube-api-access-fclk8\") on node \"crc\" DevicePath \"\"" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.834158 4750 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.834170 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:31:13 crc kubenswrapper[4750]: I0214 14:31:13.834182 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/868da7c8-8b42-419d-9801-06c947d3333c-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.136150 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" event={"ID":"868da7c8-8b42-419d-9801-06c947d3333c","Type":"ContainerDied","Data":"c5e896b14c756dc0b81708489b5ca22efbdc41f84f735382ff78d46dd157f9bd"} Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.136227 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e896b14c756dc0b81708489b5ca22efbdc41f84f735382ff78d46dd157f9bd" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.136194 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.443447 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4"] Feb 14 14:31:14 crc kubenswrapper[4750]: E0214 14:31:14.444147 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868da7c8-8b42-419d-9801-06c947d3333c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.444179 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="868da7c8-8b42-419d-9801-06c947d3333c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.444603 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="868da7c8-8b42-419d-9801-06c947d3333c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.445838 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.449074 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.450006 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.450891 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.453806 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.454342 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.460855 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4"] Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.555634 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.555739 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.555805 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.555856 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnsgx\" (UniqueName: \"kubernetes.io/projected/7184cd06-d52e-49d6-9a58-520b47303252-kube-api-access-hnsgx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.555911 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.658238 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.658703 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hnsgx\" (UniqueName: \"kubernetes.io/projected/7184cd06-d52e-49d6-9a58-520b47303252-kube-api-access-hnsgx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.658784 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.658883 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.658962 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.661800 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: 
\"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.662457 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.663350 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.663959 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.677907 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnsgx\" (UniqueName: \"kubernetes.io/projected/7184cd06-d52e-49d6-9a58-520b47303252-kube-api-access-hnsgx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hggq4\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:14 crc kubenswrapper[4750]: I0214 14:31:14.781227 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:31:15 crc kubenswrapper[4750]: I0214 14:31:15.402828 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4"] Feb 14 14:31:16 crc kubenswrapper[4750]: I0214 14:31:16.155898 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" event={"ID":"7184cd06-d52e-49d6-9a58-520b47303252","Type":"ContainerStarted","Data":"c4442f0b52be8200abe31db3bf5c1dea79723e441f20201399235219da40ad91"} Feb 14 14:31:17 crc kubenswrapper[4750]: I0214 14:31:17.172404 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" event={"ID":"7184cd06-d52e-49d6-9a58-520b47303252","Type":"ContainerStarted","Data":"3ce9869694d9de2ba5e8ffab5de5e8aab8b53eb3452ee88c4a3d70c80888d7c2"} Feb 14 14:31:22 crc kubenswrapper[4750]: I0214 14:31:22.743053 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:31:22 crc kubenswrapper[4750]: E0214 14:31:22.745722 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:31:37 crc kubenswrapper[4750]: I0214 14:31:37.742090 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:31:37 crc kubenswrapper[4750]: E0214 14:31:37.742911 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:31:49 crc kubenswrapper[4750]: I0214 14:31:49.742390 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:31:49 crc kubenswrapper[4750]: E0214 14:31:49.743140 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:32:00 crc kubenswrapper[4750]: I0214 14:32:00.743034 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:32:00 crc kubenswrapper[4750]: E0214 14:32:00.743894 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:32:15 crc kubenswrapper[4750]: I0214 14:32:15.742732 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:32:15 crc kubenswrapper[4750]: E0214 14:32:15.743940 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:32:29 crc kubenswrapper[4750]: I0214 14:32:29.742659 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:32:29 crc kubenswrapper[4750]: E0214 14:32:29.743620 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:32:41 crc kubenswrapper[4750]: I0214 14:32:41.743795 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:32:41 crc kubenswrapper[4750]: E0214 14:32:41.745251 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:32:53 crc kubenswrapper[4750]: I0214 14:32:53.742628 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:32:53 crc kubenswrapper[4750]: E0214 14:32:53.743500 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:33:04 crc kubenswrapper[4750]: I0214 14:33:04.743372 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:33:04 crc kubenswrapper[4750]: E0214 14:33:04.744819 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:33:18 crc kubenswrapper[4750]: I0214 14:33:18.757896 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:33:18 crc kubenswrapper[4750]: E0214 14:33:18.759248 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:33:30 crc kubenswrapper[4750]: I0214 14:33:30.742405 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:33:30 crc kubenswrapper[4750]: E0214 14:33:30.743014 4750 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:33:43 crc kubenswrapper[4750]: I0214 14:33:43.745573 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:33:43 crc kubenswrapper[4750]: E0214 14:33:43.751145 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:33:54 crc kubenswrapper[4750]: I0214 14:33:54.741987 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:33:54 crc kubenswrapper[4750]: E0214 14:33:54.742900 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:34:09 crc kubenswrapper[4750]: I0214 14:34:09.742266 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:34:09 crc kubenswrapper[4750]: E0214 14:34:09.743211 4750 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:34:21 crc kubenswrapper[4750]: I0214 14:34:21.742562 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:34:21 crc kubenswrapper[4750]: E0214 14:34:21.743959 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:34:32 crc kubenswrapper[4750]: I0214 14:34:32.742950 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:34:32 crc kubenswrapper[4750]: E0214 14:34:32.744227 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:34:46 crc kubenswrapper[4750]: I0214 14:34:46.742511 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:34:46 crc kubenswrapper[4750]: E0214 14:34:46.744156 4750 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.734628 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" podStartSLOduration=214.263642999 podStartE2EDuration="3m34.734607924s" podCreationTimestamp="2026-02-14 14:31:14 +0000 UTC" firstStartedPulling="2026-02-14 14:31:15.40845961 +0000 UTC m=+2347.434449091" lastFinishedPulling="2026-02-14 14:31:15.879424505 +0000 UTC m=+2347.905414016" observedRunningTime="2026-02-14 14:31:17.192618334 +0000 UTC m=+2349.218607855" watchObservedRunningTime="2026-02-14 14:34:48.734607924 +0000 UTC m=+2560.760597415" Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.764137 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rkql6"] Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.766866 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.784896 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkql6"] Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.950045 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmmlg\" (UniqueName: \"kubernetes.io/projected/7eb225ac-a9b5-460b-8742-e424a0317824-kube-api-access-rmmlg\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.952393 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-catalog-content\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:48 crc kubenswrapper[4750]: I0214 14:34:48.952466 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-utilities\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.054977 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-catalog-content\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.055033 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-utilities\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.055557 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-utilities\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.055676 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-catalog-content\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.055815 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmmlg\" (UniqueName: \"kubernetes.io/projected/7eb225ac-a9b5-460b-8742-e424a0317824-kube-api-access-rmmlg\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.083708 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmmlg\" (UniqueName: \"kubernetes.io/projected/7eb225ac-a9b5-460b-8742-e424a0317824-kube-api-access-rmmlg\") pod \"certified-operators-rkql6\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.095610 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:49 crc kubenswrapper[4750]: I0214 14:34:49.636228 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkql6"] Feb 14 14:34:50 crc kubenswrapper[4750]: I0214 14:34:49.999755 4750 generic.go:334] "Generic (PLEG): container finished" podID="7eb225ac-a9b5-460b-8742-e424a0317824" containerID="8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2" exitCode=0 Feb 14 14:34:50 crc kubenswrapper[4750]: I0214 14:34:49.999986 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerDied","Data":"8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2"} Feb 14 14:34:50 crc kubenswrapper[4750]: I0214 14:34:50.000107 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerStarted","Data":"b16b5eb876770e79f201444791377bdba060670c53ec94d62e1558053b8807ae"} Feb 14 14:34:51 crc kubenswrapper[4750]: I0214 14:34:51.016901 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerStarted","Data":"401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65"} Feb 14 14:34:53 crc kubenswrapper[4750]: I0214 14:34:53.046644 4750 generic.go:334] "Generic (PLEG): container finished" podID="7eb225ac-a9b5-460b-8742-e424a0317824" containerID="401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65" exitCode=0 Feb 14 14:34:53 crc kubenswrapper[4750]: I0214 14:34:53.046720 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" 
event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerDied","Data":"401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65"} Feb 14 14:34:54 crc kubenswrapper[4750]: I0214 14:34:54.063632 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerStarted","Data":"32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7"} Feb 14 14:34:54 crc kubenswrapper[4750]: I0214 14:34:54.092089 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rkql6" podStartSLOduration=2.650684264 podStartE2EDuration="6.092068232s" podCreationTimestamp="2026-02-14 14:34:48 +0000 UTC" firstStartedPulling="2026-02-14 14:34:50.002154197 +0000 UTC m=+2562.028143678" lastFinishedPulling="2026-02-14 14:34:53.443538155 +0000 UTC m=+2565.469527646" observedRunningTime="2026-02-14 14:34:54.085191286 +0000 UTC m=+2566.111180777" watchObservedRunningTime="2026-02-14 14:34:54.092068232 +0000 UTC m=+2566.118057733" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.113826 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2th"] Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.116708 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.135722 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2th"] Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.252960 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-utilities\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.253021 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hlb\" (UniqueName: \"kubernetes.io/projected/954500fd-03ec-442e-bf06-494193ad0bce-kube-api-access-84hlb\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.253732 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-catalog-content\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.355954 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-catalog-content\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.356075 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-84hlb\" (UniqueName: \"kubernetes.io/projected/954500fd-03ec-442e-bf06-494193ad0bce-kube-api-access-84hlb\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.356093 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-utilities\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.356497 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-catalog-content\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.356546 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-utilities\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.379443 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hlb\" (UniqueName: \"kubernetes.io/projected/954500fd-03ec-442e-bf06-494193ad0bce-kube-api-access-84hlb\") pod \"redhat-marketplace-cv2th\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.447057 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:34:56 crc kubenswrapper[4750]: I0214 14:34:56.996162 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2th"] Feb 14 14:34:57 crc kubenswrapper[4750]: I0214 14:34:57.105499 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerStarted","Data":"829cc34843a371c9e4af72032a4905ccd8b775090a590ced7683a3bcb9e2828c"} Feb 14 14:34:57 crc kubenswrapper[4750]: I0214 14:34:57.741967 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:34:57 crc kubenswrapper[4750]: E0214 14:34:57.742602 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:34:58 crc kubenswrapper[4750]: I0214 14:34:58.120543 4750 generic.go:334] "Generic (PLEG): container finished" podID="954500fd-03ec-442e-bf06-494193ad0bce" containerID="90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796" exitCode=0 Feb 14 14:34:58 crc kubenswrapper[4750]: I0214 14:34:58.120615 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerDied","Data":"90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796"} Feb 14 14:34:59 crc kubenswrapper[4750]: I0214 14:34:59.096338 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:59 crc kubenswrapper[4750]: I0214 14:34:59.098451 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:34:59 crc kubenswrapper[4750]: I0214 14:34:59.133619 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerStarted","Data":"2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593"} Feb 14 14:35:00 crc kubenswrapper[4750]: I0214 14:35:00.149015 4750 generic.go:334] "Generic (PLEG): container finished" podID="954500fd-03ec-442e-bf06-494193ad0bce" containerID="2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593" exitCode=0 Feb 14 14:35:00 crc kubenswrapper[4750]: I0214 14:35:00.149073 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerDied","Data":"2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593"} Feb 14 14:35:00 crc kubenswrapper[4750]: I0214 14:35:00.165248 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rkql6" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="registry-server" probeResult="failure" output=< Feb 14 14:35:00 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:35:00 crc kubenswrapper[4750]: > Feb 14 14:35:01 crc kubenswrapper[4750]: I0214 14:35:01.169067 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerStarted","Data":"6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391"} Feb 14 14:35:01 crc kubenswrapper[4750]: I0214 14:35:01.195996 4750 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-marketplace-cv2th" podStartSLOduration=2.70199319 podStartE2EDuration="5.195980253s" podCreationTimestamp="2026-02-14 14:34:56 +0000 UTC" firstStartedPulling="2026-02-14 14:34:58.12388074 +0000 UTC m=+2570.149870221" lastFinishedPulling="2026-02-14 14:35:00.617867793 +0000 UTC m=+2572.643857284" observedRunningTime="2026-02-14 14:35:01.189488238 +0000 UTC m=+2573.215477719" watchObservedRunningTime="2026-02-14 14:35:01.195980253 +0000 UTC m=+2573.221969734" Feb 14 14:35:06 crc kubenswrapper[4750]: I0214 14:35:06.236607 4750 generic.go:334] "Generic (PLEG): container finished" podID="7184cd06-d52e-49d6-9a58-520b47303252" containerID="3ce9869694d9de2ba5e8ffab5de5e8aab8b53eb3452ee88c4a3d70c80888d7c2" exitCode=0 Feb 14 14:35:06 crc kubenswrapper[4750]: I0214 14:35:06.236786 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" event={"ID":"7184cd06-d52e-49d6-9a58-520b47303252","Type":"ContainerDied","Data":"3ce9869694d9de2ba5e8ffab5de5e8aab8b53eb3452ee88c4a3d70c80888d7c2"} Feb 14 14:35:06 crc kubenswrapper[4750]: I0214 14:35:06.449961 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:35:06 crc kubenswrapper[4750]: I0214 14:35:06.450362 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.509248 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cv2th" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="registry-server" probeResult="failure" output=< Feb 14 14:35:07 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:35:07 crc kubenswrapper[4750]: > Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.870489 4750 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.959865 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-secret-0\") pod \"7184cd06-d52e-49d6-9a58-520b47303252\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.960675 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-combined-ca-bundle\") pod \"7184cd06-d52e-49d6-9a58-520b47303252\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.960922 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnsgx\" (UniqueName: \"kubernetes.io/projected/7184cd06-d52e-49d6-9a58-520b47303252-kube-api-access-hnsgx\") pod \"7184cd06-d52e-49d6-9a58-520b47303252\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.961055 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-inventory\") pod \"7184cd06-d52e-49d6-9a58-520b47303252\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.961185 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-ssh-key-openstack-edpm-ipam\") pod \"7184cd06-d52e-49d6-9a58-520b47303252\" (UID: \"7184cd06-d52e-49d6-9a58-520b47303252\") " Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.965802 4750 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7184cd06-d52e-49d6-9a58-520b47303252" (UID: "7184cd06-d52e-49d6-9a58-520b47303252"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.968441 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7184cd06-d52e-49d6-9a58-520b47303252-kube-api-access-hnsgx" (OuterVolumeSpecName: "kube-api-access-hnsgx") pod "7184cd06-d52e-49d6-9a58-520b47303252" (UID: "7184cd06-d52e-49d6-9a58-520b47303252"). InnerVolumeSpecName "kube-api-access-hnsgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.992275 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7184cd06-d52e-49d6-9a58-520b47303252" (UID: "7184cd06-d52e-49d6-9a58-520b47303252"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:35:07 crc kubenswrapper[4750]: I0214 14:35:07.994487 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-inventory" (OuterVolumeSpecName: "inventory") pod "7184cd06-d52e-49d6-9a58-520b47303252" (UID: "7184cd06-d52e-49d6-9a58-520b47303252"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.005426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7184cd06-d52e-49d6-9a58-520b47303252" (UID: "7184cd06-d52e-49d6-9a58-520b47303252"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.064908 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.064965 4750 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.064983 4750 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.064996 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnsgx\" (UniqueName: \"kubernetes.io/projected/7184cd06-d52e-49d6-9a58-520b47303252-kube-api-access-hnsgx\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.065005 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7184cd06-d52e-49d6-9a58-520b47303252-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.264314 4750 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" event={"ID":"7184cd06-d52e-49d6-9a58-520b47303252","Type":"ContainerDied","Data":"c4442f0b52be8200abe31db3bf5c1dea79723e441f20201399235219da40ad91"} Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.264366 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4442f0b52be8200abe31db3bf5c1dea79723e441f20201399235219da40ad91" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.264452 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hggq4" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.418341 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf"] Feb 14 14:35:08 crc kubenswrapper[4750]: E0214 14:35:08.419258 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7184cd06-d52e-49d6-9a58-520b47303252" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.419278 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7184cd06-d52e-49d6-9a58-520b47303252" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.419710 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="7184cd06-d52e-49d6-9a58-520b47303252" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.421389 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.431566 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.431860 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.432035 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.432263 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.434462 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.434937 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf"] Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.437821 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.441520 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.490529 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: 
I0214 14:35:08.490601 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.490779 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.490847 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491023 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491103 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491186 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmq7\" (UniqueName: \"kubernetes.io/projected/e65f04d6-c4d8-4999-8a87-be675256e775-kube-api-access-ffmq7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491290 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e65f04d6-c4d8-4999-8a87-be675256e775-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491468 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491532 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-combined-ca-bundle\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.491588 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.593921 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.593974 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.593999 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffmq7\" (UniqueName: \"kubernetes.io/projected/e65f04d6-c4d8-4999-8a87-be675256e775-kube-api-access-ffmq7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594035 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e65f04d6-c4d8-4999-8a87-be675256e775-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594168 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594203 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594224 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594264 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594298 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594332 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.594352 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.596095 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e65f04d6-c4d8-4999-8a87-be675256e775-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.599202 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.599419 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.599614 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.600246 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.600682 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.601124 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.601992 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.603740 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.605018 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.623486 4750 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ffmq7\" (UniqueName: \"kubernetes.io/projected/e65f04d6-c4d8-4999-8a87-be675256e775-kube-api-access-ffmq7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nq9nf\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.757867 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:35:08 crc kubenswrapper[4750]: I0214 14:35:08.764574 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:35:09 crc kubenswrapper[4750]: I0214 14:35:09.180579 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:35:09 crc kubenswrapper[4750]: I0214 14:35:09.240272 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:35:09 crc kubenswrapper[4750]: I0214 14:35:09.414645 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf"] Feb 14 14:35:09 crc kubenswrapper[4750]: I0214 14:35:09.433788 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rkql6"] Feb 14 14:35:09 crc kubenswrapper[4750]: I0214 14:35:09.870677 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.287671 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" event={"ID":"e65f04d6-c4d8-4999-8a87-be675256e775","Type":"ContainerStarted","Data":"4e92c774325d9ef337160fe4e2f7879375eb9889725f5052d388c59aa8584055"} Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 
14:35:10.288023 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" event={"ID":"e65f04d6-c4d8-4999-8a87-be675256e775","Type":"ContainerStarted","Data":"dd6ad30f86ce4a7bd5e230f72d1517fbcd2e756d7b42b23442cc97fae5497df2"} Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.287783 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rkql6" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="registry-server" containerID="cri-o://32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7" gracePeriod=2 Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.330968 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" podStartSLOduration=1.886753647 podStartE2EDuration="2.330941189s" podCreationTimestamp="2026-02-14 14:35:08 +0000 UTC" firstStartedPulling="2026-02-14 14:35:09.423184963 +0000 UTC m=+2581.449174444" lastFinishedPulling="2026-02-14 14:35:09.867372505 +0000 UTC m=+2581.893361986" observedRunningTime="2026-02-14 14:35:10.318961678 +0000 UTC m=+2582.344951169" watchObservedRunningTime="2026-02-14 14:35:10.330941189 +0000 UTC m=+2582.356930670" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.835092 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.857507 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-catalog-content\") pod \"7eb225ac-a9b5-460b-8742-e424a0317824\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.857661 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-utilities\") pod \"7eb225ac-a9b5-460b-8742-e424a0317824\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.857702 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmmlg\" (UniqueName: \"kubernetes.io/projected/7eb225ac-a9b5-460b-8742-e424a0317824-kube-api-access-rmmlg\") pod \"7eb225ac-a9b5-460b-8742-e424a0317824\" (UID: \"7eb225ac-a9b5-460b-8742-e424a0317824\") " Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.868245 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb225ac-a9b5-460b-8742-e424a0317824-kube-api-access-rmmlg" (OuterVolumeSpecName: "kube-api-access-rmmlg") pod "7eb225ac-a9b5-460b-8742-e424a0317824" (UID: "7eb225ac-a9b5-460b-8742-e424a0317824"). InnerVolumeSpecName "kube-api-access-rmmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.868992 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-utilities" (OuterVolumeSpecName: "utilities") pod "7eb225ac-a9b5-460b-8742-e424a0317824" (UID: "7eb225ac-a9b5-460b-8742-e424a0317824"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.935048 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eb225ac-a9b5-460b-8742-e424a0317824" (UID: "7eb225ac-a9b5-460b-8742-e424a0317824"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.960667 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.960960 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmmlg\" (UniqueName: \"kubernetes.io/projected/7eb225ac-a9b5-460b-8742-e424a0317824-kube-api-access-rmmlg\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:10 crc kubenswrapper[4750]: I0214 14:35:10.960970 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb225ac-a9b5-460b-8742-e424a0317824-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.310674 4750 generic.go:334] "Generic (PLEG): container finished" podID="7eb225ac-a9b5-460b-8742-e424a0317824" containerID="32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7" exitCode=0 Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.315572 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rkql6" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.315891 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerDied","Data":"32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7"} Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.315958 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkql6" event={"ID":"7eb225ac-a9b5-460b-8742-e424a0317824","Type":"ContainerDied","Data":"b16b5eb876770e79f201444791377bdba060670c53ec94d62e1558053b8807ae"} Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.315993 4750 scope.go:117] "RemoveContainer" containerID="32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.363428 4750 scope.go:117] "RemoveContainer" containerID="401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.366410 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rkql6"] Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.380883 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rkql6"] Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.390585 4750 scope.go:117] "RemoveContainer" containerID="8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.463020 4750 scope.go:117] "RemoveContainer" containerID="32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7" Feb 14 14:35:11 crc kubenswrapper[4750]: E0214 14:35:11.463556 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7\": container with ID starting with 32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7 not found: ID does not exist" containerID="32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.463590 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7"} err="failed to get container status \"32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7\": rpc error: code = NotFound desc = could not find container \"32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7\": container with ID starting with 32e870bc17580bc4c010023f10a8f4ccc06ee1ad9e5cbe4b62b92651614e2ee7 not found: ID does not exist" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.463612 4750 scope.go:117] "RemoveContainer" containerID="401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65" Feb 14 14:35:11 crc kubenswrapper[4750]: E0214 14:35:11.463952 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65\": container with ID starting with 401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65 not found: ID does not exist" containerID="401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.464001 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65"} err="failed to get container status \"401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65\": rpc error: code = NotFound desc = could not find container \"401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65\": container with ID 
starting with 401e572ca27779f6da504eeb0c128c19bd0c58dff3644668810917d7fbc0ec65 not found: ID does not exist" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.464034 4750 scope.go:117] "RemoveContainer" containerID="8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2" Feb 14 14:35:11 crc kubenswrapper[4750]: E0214 14:35:11.464357 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2\": container with ID starting with 8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2 not found: ID does not exist" containerID="8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2" Feb 14 14:35:11 crc kubenswrapper[4750]: I0214 14:35:11.464392 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2"} err="failed to get container status \"8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2\": rpc error: code = NotFound desc = could not find container \"8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2\": container with ID starting with 8af2e367d80b1d292e5f6628d1244c81f56781559bfe819921f1308c98e654d2 not found: ID does not exist" Feb 14 14:35:12 crc kubenswrapper[4750]: I0214 14:35:12.742547 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:35:12 crc kubenswrapper[4750]: I0214 14:35:12.765401 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" path="/var/lib/kubelet/pods/7eb225ac-a9b5-460b-8742-e424a0317824/volumes" Feb 14 14:35:13 crc kubenswrapper[4750]: I0214 14:35:13.343854 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" 
event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"69c59495d37779ccc5f344088d3c0e426e25852bfc1efeee9621dc4500593e17"} Feb 14 14:35:16 crc kubenswrapper[4750]: I0214 14:35:16.524845 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:35:16 crc kubenswrapper[4750]: I0214 14:35:16.610241 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:35:16 crc kubenswrapper[4750]: I0214 14:35:16.793438 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2th"] Feb 14 14:35:18 crc kubenswrapper[4750]: I0214 14:35:18.411987 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cv2th" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="registry-server" containerID="cri-o://6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391" gracePeriod=2 Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.003626 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.205350 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84hlb\" (UniqueName: \"kubernetes.io/projected/954500fd-03ec-442e-bf06-494193ad0bce-kube-api-access-84hlb\") pod \"954500fd-03ec-442e-bf06-494193ad0bce\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.205529 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-catalog-content\") pod \"954500fd-03ec-442e-bf06-494193ad0bce\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.205992 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-utilities\") pod \"954500fd-03ec-442e-bf06-494193ad0bce\" (UID: \"954500fd-03ec-442e-bf06-494193ad0bce\") " Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.207158 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-utilities" (OuterVolumeSpecName: "utilities") pod "954500fd-03ec-442e-bf06-494193ad0bce" (UID: "954500fd-03ec-442e-bf06-494193ad0bce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.210459 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.214257 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/954500fd-03ec-442e-bf06-494193ad0bce-kube-api-access-84hlb" (OuterVolumeSpecName: "kube-api-access-84hlb") pod "954500fd-03ec-442e-bf06-494193ad0bce" (UID: "954500fd-03ec-442e-bf06-494193ad0bce"). InnerVolumeSpecName "kube-api-access-84hlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.234625 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "954500fd-03ec-442e-bf06-494193ad0bce" (UID: "954500fd-03ec-442e-bf06-494193ad0bce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.311616 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84hlb\" (UniqueName: \"kubernetes.io/projected/954500fd-03ec-442e-bf06-494193ad0bce-kube-api-access-84hlb\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.311650 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954500fd-03ec-442e-bf06-494193ad0bce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.423804 4750 generic.go:334] "Generic (PLEG): container finished" podID="954500fd-03ec-442e-bf06-494193ad0bce" containerID="6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391" exitCode=0 Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.423877 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2th" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.423912 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerDied","Data":"6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391"} Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.426963 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2th" event={"ID":"954500fd-03ec-442e-bf06-494193ad0bce","Type":"ContainerDied","Data":"829cc34843a371c9e4af72032a4905ccd8b775090a590ced7683a3bcb9e2828c"} Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.426984 4750 scope.go:117] "RemoveContainer" containerID="6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.466171 4750 scope.go:117] "RemoveContainer" 
containerID="2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.490776 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2th"] Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.509386 4750 scope.go:117] "RemoveContainer" containerID="90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.510814 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2th"] Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.606725 4750 scope.go:117] "RemoveContainer" containerID="6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391" Feb 14 14:35:19 crc kubenswrapper[4750]: E0214 14:35:19.610512 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391\": container with ID starting with 6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391 not found: ID does not exist" containerID="6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.610555 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391"} err="failed to get container status \"6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391\": rpc error: code = NotFound desc = could not find container \"6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391\": container with ID starting with 6c611c7b8b10e8cc559ffd3c3a27b4b32ab2a98e1442a2464c56a1defd928391 not found: ID does not exist" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.610582 4750 scope.go:117] "RemoveContainer" 
containerID="2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593" Feb 14 14:35:19 crc kubenswrapper[4750]: E0214 14:35:19.611064 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593\": container with ID starting with 2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593 not found: ID does not exist" containerID="2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.611137 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593"} err="failed to get container status \"2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593\": rpc error: code = NotFound desc = could not find container \"2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593\": container with ID starting with 2b07e71124724d40f675ab040abd7dc99d37f0ca3f2fe1fd6fd8c3488ae08593 not found: ID does not exist" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.611167 4750 scope.go:117] "RemoveContainer" containerID="90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796" Feb 14 14:35:19 crc kubenswrapper[4750]: E0214 14:35:19.611470 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796\": container with ID starting with 90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796 not found: ID does not exist" containerID="90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796" Feb 14 14:35:19 crc kubenswrapper[4750]: I0214 14:35:19.611496 4750 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796"} err="failed to get container status \"90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796\": rpc error: code = NotFound desc = could not find container \"90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796\": container with ID starting with 90af790730f82ba063d5074a529d3d22637d4355a949d3627f682c176e179796 not found: ID does not exist" Feb 14 14:35:20 crc kubenswrapper[4750]: I0214 14:35:20.763897 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="954500fd-03ec-442e-bf06-494193ad0bce" path="/var/lib/kubelet/pods/954500fd-03ec-442e-bf06-494193ad0bce/volumes" Feb 14 14:37:30 crc kubenswrapper[4750]: I0214 14:37:30.129221 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:37:30 crc kubenswrapper[4750]: I0214 14:37:30.129717 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:37:35 crc kubenswrapper[4750]: I0214 14:37:35.060565 4750 generic.go:334] "Generic (PLEG): container finished" podID="e65f04d6-c4d8-4999-8a87-be675256e775" containerID="4e92c774325d9ef337160fe4e2f7879375eb9889725f5052d388c59aa8584055" exitCode=0 Feb 14 14:37:35 crc kubenswrapper[4750]: I0214 14:37:35.060621 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" 
event={"ID":"e65f04d6-c4d8-4999-8a87-be675256e775","Type":"ContainerDied","Data":"4e92c774325d9ef337160fe4e2f7879375eb9889725f5052d388c59aa8584055"} Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.610802 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.763339 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-0\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.763818 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-combined-ca-bundle\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.764843 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-2\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.764937 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-1\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.765009 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e65f04d6-c4d8-4999-8a87-be675256e775-nova-extra-config-0\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.765090 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-ssh-key-openstack-edpm-ipam\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.765315 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-0\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.765444 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-1\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.766615 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffmq7\" (UniqueName: \"kubernetes.io/projected/e65f04d6-c4d8-4999-8a87-be675256e775-kube-api-access-ffmq7\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.766695 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-3\") pod 
\"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.766755 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-inventory\") pod \"e65f04d6-c4d8-4999-8a87-be675256e775\" (UID: \"e65f04d6-c4d8-4999-8a87-be675256e775\") " Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.773984 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65f04d6-c4d8-4999-8a87-be675256e775-kube-api-access-ffmq7" (OuterVolumeSpecName: "kube-api-access-ffmq7") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "kube-api-access-ffmq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.774280 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.813592 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.824037 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.825322 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e65f04d6-c4d8-4999-8a87-be675256e775-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.839790 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.839997 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.844414 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.848187 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-inventory" (OuterVolumeSpecName: "inventory") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.855996 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870639 4750 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870670 4750 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870680 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffmq7\" (UniqueName: \"kubernetes.io/projected/e65f04d6-c4d8-4999-8a87-be675256e775-kube-api-access-ffmq7\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870692 4750 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870702 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870712 4750 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870720 4750 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-2\") 
on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870729 4750 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870738 4750 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e65f04d6-c4d8-4999-8a87-be675256e775-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.870748 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.871444 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e65f04d6-c4d8-4999-8a87-be675256e775" (UID: "e65f04d6-c4d8-4999-8a87-be675256e775"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:37:36 crc kubenswrapper[4750]: I0214 14:37:36.972887 4750 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e65f04d6-c4d8-4999-8a87-be675256e775-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.088359 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" event={"ID":"e65f04d6-c4d8-4999-8a87-be675256e775","Type":"ContainerDied","Data":"dd6ad30f86ce4a7bd5e230f72d1517fbcd2e756d7b42b23442cc97fae5497df2"} Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.088409 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nq9nf" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.088428 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6ad30f86ce4a7bd5e230f72d1517fbcd2e756d7b42b23442cc97fae5497df2" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219156 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb"] Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219579 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="extract-utilities" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219595 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="extract-utilities" Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219612 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="extract-utilities" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219619 4750 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="extract-utilities" Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219634 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="registry-server" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219641 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="registry-server" Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219658 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="extract-content" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219663 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="extract-content" Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219681 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65f04d6-c4d8-4999-8a87-be675256e775" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219689 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65f04d6-c4d8-4999-8a87-be675256e775" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219700 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="extract-content" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219706 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="extract-content" Feb 14 14:37:37 crc kubenswrapper[4750]: E0214 14:37:37.219720 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="registry-server" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219726 
4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="registry-server" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219947 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65f04d6-c4d8-4999-8a87-be675256e775" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219963 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="954500fd-03ec-442e-bf06-494193ad0bce" containerName="registry-server" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.219976 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb225ac-a9b5-460b-8742-e424a0317824" containerName="registry-server" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.220891 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.223457 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.226129 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.226363 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.226539 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.236472 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.238131 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb"] Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.381662 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.381740 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.381778 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.382151 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc 
kubenswrapper[4750]: I0214 14:37:37.382303 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.382541 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99lhn\" (UniqueName: \"kubernetes.io/projected/66d16d12-f651-4f21-9160-e22496e7e969-kube-api-access-99lhn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.382677 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485034 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99lhn\" (UniqueName: \"kubernetes.io/projected/66d16d12-f651-4f21-9160-e22496e7e969-kube-api-access-99lhn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485125 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485173 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485210 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485233 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485306 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.485345 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.490892 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.491080 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.492707 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc 
kubenswrapper[4750]: I0214 14:37:37.492770 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.493962 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.502015 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.520082 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99lhn\" (UniqueName: \"kubernetes.io/projected/66d16d12-f651-4f21-9160-e22496e7e969-kube-api-access-99lhn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-58rdb\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:37 crc kubenswrapper[4750]: I0214 14:37:37.590441 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.217132 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb"] Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.225004 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.383461 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48h46"] Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.389797 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.445507 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgqv\" (UniqueName: \"kubernetes.io/projected/a262245c-a7f7-4408-903f-d2d32009e399-kube-api-access-6qgqv\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.445667 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-utilities\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.445778 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-catalog-content\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " 
pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.477406 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48h46"] Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.548733 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-utilities\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.548866 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-catalog-content\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.548948 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qgqv\" (UniqueName: \"kubernetes.io/projected/a262245c-a7f7-4408-903f-d2d32009e399-kube-api-access-6qgqv\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.549912 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-utilities\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.549949 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-catalog-content\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.570757 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qgqv\" (UniqueName: \"kubernetes.io/projected/a262245c-a7f7-4408-903f-d2d32009e399-kube-api-access-6qgqv\") pod \"redhat-operators-48h46\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:38 crc kubenswrapper[4750]: I0214 14:37:38.774128 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:39 crc kubenswrapper[4750]: I0214 14:37:39.123419 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" event={"ID":"66d16d12-f651-4f21-9160-e22496e7e969","Type":"ContainerStarted","Data":"0cc72aaca380fcbff6db89212e1dba9c45d97019a398accc0e3fa1debd65a9c8"} Feb 14 14:37:39 crc kubenswrapper[4750]: I0214 14:37:39.123881 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" event={"ID":"66d16d12-f651-4f21-9160-e22496e7e969","Type":"ContainerStarted","Data":"9623cfb6e04f79e5d936512be0da6eeddbe2d620742372371c8355e09df6fac7"} Feb 14 14:37:39 crc kubenswrapper[4750]: I0214 14:37:39.153760 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" podStartSLOduration=1.578514003 podStartE2EDuration="2.153741544s" podCreationTimestamp="2026-02-14 14:37:37 +0000 UTC" firstStartedPulling="2026-02-14 14:37:38.224603807 +0000 UTC m=+2730.250593338" lastFinishedPulling="2026-02-14 14:37:38.799831398 +0000 UTC m=+2730.825820879" observedRunningTime="2026-02-14 
14:37:39.143484071 +0000 UTC m=+2731.169473552" watchObservedRunningTime="2026-02-14 14:37:39.153741544 +0000 UTC m=+2731.179731025" Feb 14 14:37:39 crc kubenswrapper[4750]: I0214 14:37:39.329708 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48h46"] Feb 14 14:37:39 crc kubenswrapper[4750]: W0214 14:37:39.335273 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda262245c_a7f7_4408_903f_d2d32009e399.slice/crio-c4a3ea92187f8cb6a5628e47b2618748f6a14b6faf966846eb9d1dc7aa27408a WatchSource:0}: Error finding container c4a3ea92187f8cb6a5628e47b2618748f6a14b6faf966846eb9d1dc7aa27408a: Status 404 returned error can't find the container with id c4a3ea92187f8cb6a5628e47b2618748f6a14b6faf966846eb9d1dc7aa27408a Feb 14 14:37:40 crc kubenswrapper[4750]: I0214 14:37:40.132649 4750 generic.go:334] "Generic (PLEG): container finished" podID="a262245c-a7f7-4408-903f-d2d32009e399" containerID="2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe" exitCode=0 Feb 14 14:37:40 crc kubenswrapper[4750]: I0214 14:37:40.132753 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerDied","Data":"2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe"} Feb 14 14:37:40 crc kubenswrapper[4750]: I0214 14:37:40.132940 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerStarted","Data":"c4a3ea92187f8cb6a5628e47b2618748f6a14b6faf966846eb9d1dc7aa27408a"} Feb 14 14:37:41 crc kubenswrapper[4750]: I0214 14:37:41.145457 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" 
event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerStarted","Data":"01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be"} Feb 14 14:37:46 crc kubenswrapper[4750]: I0214 14:37:46.195408 4750 generic.go:334] "Generic (PLEG): container finished" podID="a262245c-a7f7-4408-903f-d2d32009e399" containerID="01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be" exitCode=0 Feb 14 14:37:46 crc kubenswrapper[4750]: I0214 14:37:46.195489 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerDied","Data":"01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be"} Feb 14 14:37:47 crc kubenswrapper[4750]: I0214 14:37:47.208275 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerStarted","Data":"38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675"} Feb 14 14:37:47 crc kubenswrapper[4750]: I0214 14:37:47.247272 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48h46" podStartSLOduration=2.803229503 podStartE2EDuration="9.247250889s" podCreationTimestamp="2026-02-14 14:37:38 +0000 UTC" firstStartedPulling="2026-02-14 14:37:40.134880059 +0000 UTC m=+2732.160869540" lastFinishedPulling="2026-02-14 14:37:46.578901435 +0000 UTC m=+2738.604890926" observedRunningTime="2026-02-14 14:37:47.232699743 +0000 UTC m=+2739.258689224" watchObservedRunningTime="2026-02-14 14:37:47.247250889 +0000 UTC m=+2739.273240370" Feb 14 14:37:48 crc kubenswrapper[4750]: I0214 14:37:48.774973 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:48 crc kubenswrapper[4750]: I0214 14:37:48.776268 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:49 crc kubenswrapper[4750]: I0214 14:37:49.838850 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-48h46" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="registry-server" probeResult="failure" output=< Feb 14 14:37:49 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:37:49 crc kubenswrapper[4750]: > Feb 14 14:37:58 crc kubenswrapper[4750]: I0214 14:37:58.845319 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:37:58 crc kubenswrapper[4750]: I0214 14:37:58.912818 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:38:00 crc kubenswrapper[4750]: I0214 14:38:00.129283 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:38:00 crc kubenswrapper[4750]: I0214 14:38:00.129386 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:38:02 crc kubenswrapper[4750]: I0214 14:38:02.435401 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48h46"] Feb 14 14:38:02 crc kubenswrapper[4750]: I0214 14:38:02.435806 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48h46" 
podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="registry-server" containerID="cri-o://38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675" gracePeriod=2 Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.002247 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.108674 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qgqv\" (UniqueName: \"kubernetes.io/projected/a262245c-a7f7-4408-903f-d2d32009e399-kube-api-access-6qgqv\") pod \"a262245c-a7f7-4408-903f-d2d32009e399\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.109325 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-catalog-content\") pod \"a262245c-a7f7-4408-903f-d2d32009e399\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.109373 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-utilities\") pod \"a262245c-a7f7-4408-903f-d2d32009e399\" (UID: \"a262245c-a7f7-4408-903f-d2d32009e399\") " Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.111190 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-utilities" (OuterVolumeSpecName: "utilities") pod "a262245c-a7f7-4408-903f-d2d32009e399" (UID: "a262245c-a7f7-4408-903f-d2d32009e399"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.141438 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a262245c-a7f7-4408-903f-d2d32009e399-kube-api-access-6qgqv" (OuterVolumeSpecName: "kube-api-access-6qgqv") pod "a262245c-a7f7-4408-903f-d2d32009e399" (UID: "a262245c-a7f7-4408-903f-d2d32009e399"). InnerVolumeSpecName "kube-api-access-6qgqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.212515 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.212726 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qgqv\" (UniqueName: \"kubernetes.io/projected/a262245c-a7f7-4408-903f-d2d32009e399-kube-api-access-6qgqv\") on node \"crc\" DevicePath \"\"" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.241141 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a262245c-a7f7-4408-903f-d2d32009e399" (UID: "a262245c-a7f7-4408-903f-d2d32009e399"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.314657 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a262245c-a7f7-4408-903f-d2d32009e399-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.404424 4750 generic.go:334] "Generic (PLEG): container finished" podID="a262245c-a7f7-4408-903f-d2d32009e399" containerID="38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675" exitCode=0 Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.404476 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerDied","Data":"38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675"} Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.404508 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48h46" event={"ID":"a262245c-a7f7-4408-903f-d2d32009e399","Type":"ContainerDied","Data":"c4a3ea92187f8cb6a5628e47b2618748f6a14b6faf966846eb9d1dc7aa27408a"} Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.404529 4750 scope.go:117] "RemoveContainer" containerID="38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.404571 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48h46" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.430512 4750 scope.go:117] "RemoveContainer" containerID="01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.464339 4750 scope.go:117] "RemoveContainer" containerID="2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.464566 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48h46"] Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.480092 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48h46"] Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.536168 4750 scope.go:117] "RemoveContainer" containerID="38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675" Feb 14 14:38:03 crc kubenswrapper[4750]: E0214 14:38:03.536502 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675\": container with ID starting with 38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675 not found: ID does not exist" containerID="38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.536532 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675"} err="failed to get container status \"38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675\": rpc error: code = NotFound desc = could not find container \"38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675\": container with ID starting with 38a5548167df0c8ee2410c43064a4712ce56b6a1bd0b737451c11b84033db675 not found: ID does 
not exist" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.536557 4750 scope.go:117] "RemoveContainer" containerID="01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be" Feb 14 14:38:03 crc kubenswrapper[4750]: E0214 14:38:03.536958 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be\": container with ID starting with 01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be not found: ID does not exist" containerID="01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.537002 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be"} err="failed to get container status \"01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be\": rpc error: code = NotFound desc = could not find container \"01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be\": container with ID starting with 01834372c9615c5e5f5d94878bfe07b1612a0ea5d42815434e08706490f775be not found: ID does not exist" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.537032 4750 scope.go:117] "RemoveContainer" containerID="2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe" Feb 14 14:38:03 crc kubenswrapper[4750]: E0214 14:38:03.537378 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe\": container with ID starting with 2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe not found: ID does not exist" containerID="2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe" Feb 14 14:38:03 crc kubenswrapper[4750]: I0214 14:38:03.537449 4750 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe"} err="failed to get container status \"2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe\": rpc error: code = NotFound desc = could not find container \"2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe\": container with ID starting with 2f42e1c3781a50b9378131d6d6d858073262450324b36124cb6bac62dea0babe not found: ID does not exist" Feb 14 14:38:04 crc kubenswrapper[4750]: I0214 14:38:04.762310 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a262245c-a7f7-4408-903f-d2d32009e399" path="/var/lib/kubelet/pods/a262245c-a7f7-4408-903f-d2d32009e399/volumes" Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.129759 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.130438 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.130508 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.131677 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69c59495d37779ccc5f344088d3c0e426e25852bfc1efeee9621dc4500593e17"} 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.131751 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://69c59495d37779ccc5f344088d3c0e426e25852bfc1efeee9621dc4500593e17" gracePeriod=600 Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.745662 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="69c59495d37779ccc5f344088d3c0e426e25852bfc1efeee9621dc4500593e17" exitCode=0 Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.755503 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"69c59495d37779ccc5f344088d3c0e426e25852bfc1efeee9621dc4500593e17"} Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.755746 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e"} Feb 14 14:38:30 crc kubenswrapper[4750]: I0214 14:38:30.755828 4750 scope.go:117] "RemoveContainer" containerID="1430d5395edf3a369306a02e4359dbfdcce97d822265559cd2d54ab09ac659be" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.117132 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98xfw"] Feb 14 14:39:20 crc kubenswrapper[4750]: E0214 14:39:20.118134 4750 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="extract-content" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.118151 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="extract-content" Feb 14 14:39:20 crc kubenswrapper[4750]: E0214 14:39:20.118172 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="extract-utilities" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.118180 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="extract-utilities" Feb 14 14:39:20 crc kubenswrapper[4750]: E0214 14:39:20.118222 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="registry-server" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.118230 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="registry-server" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.118517 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="a262245c-a7f7-4408-903f-d2d32009e399" containerName="registry-server" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.120797 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.147069 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98xfw"] Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.214202 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-catalog-content\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.214277 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cdj\" (UniqueName: \"kubernetes.io/projected/44e2efb5-601c-452d-8d78-2e99f09fccb2-kube-api-access-64cdj\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.214306 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-utilities\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.317145 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-catalog-content\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.317298 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-64cdj\" (UniqueName: \"kubernetes.io/projected/44e2efb5-601c-452d-8d78-2e99f09fccb2-kube-api-access-64cdj\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.317353 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-utilities\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.317651 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-catalog-content\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.318037 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-utilities\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.347269 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cdj\" (UniqueName: \"kubernetes.io/projected/44e2efb5-601c-452d-8d78-2e99f09fccb2-kube-api-access-64cdj\") pod \"community-operators-98xfw\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:20 crc kubenswrapper[4750]: I0214 14:39:20.448700 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:21 crc kubenswrapper[4750]: I0214 14:39:21.120781 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98xfw"] Feb 14 14:39:21 crc kubenswrapper[4750]: W0214 14:39:21.140524 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44e2efb5_601c_452d_8d78_2e99f09fccb2.slice/crio-6993a83a1c39ea767946900dc0b95ee7ac7fe45752a13f297eb2f6300f09fa58 WatchSource:0}: Error finding container 6993a83a1c39ea767946900dc0b95ee7ac7fe45752a13f297eb2f6300f09fa58: Status 404 returned error can't find the container with id 6993a83a1c39ea767946900dc0b95ee7ac7fe45752a13f297eb2f6300f09fa58 Feb 14 14:39:21 crc kubenswrapper[4750]: I0214 14:39:21.423904 4750 generic.go:334] "Generic (PLEG): container finished" podID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerID="1e1dcd8194b6050a183b0a599a59e0793e1420e4cbcb1e244303d34841638370" exitCode=0 Feb 14 14:39:21 crc kubenswrapper[4750]: I0214 14:39:21.423979 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerDied","Data":"1e1dcd8194b6050a183b0a599a59e0793e1420e4cbcb1e244303d34841638370"} Feb 14 14:39:21 crc kubenswrapper[4750]: I0214 14:39:21.424373 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerStarted","Data":"6993a83a1c39ea767946900dc0b95ee7ac7fe45752a13f297eb2f6300f09fa58"} Feb 14 14:39:22 crc kubenswrapper[4750]: I0214 14:39:22.443445 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" 
event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerStarted","Data":"d0c48fe6de8ba9b158fb1c06cfb1f9e9e9f360235bbd9cc0b960c414b94d1f63"} Feb 14 14:39:24 crc kubenswrapper[4750]: I0214 14:39:24.470353 4750 generic.go:334] "Generic (PLEG): container finished" podID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerID="d0c48fe6de8ba9b158fb1c06cfb1f9e9e9f360235bbd9cc0b960c414b94d1f63" exitCode=0 Feb 14 14:39:24 crc kubenswrapper[4750]: I0214 14:39:24.470446 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerDied","Data":"d0c48fe6de8ba9b158fb1c06cfb1f9e9e9f360235bbd9cc0b960c414b94d1f63"} Feb 14 14:39:25 crc kubenswrapper[4750]: I0214 14:39:25.488262 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerStarted","Data":"80001dcab7351a5f63aee46d28a483b0c48de426ab8d3798a2df212c1dd08717"} Feb 14 14:39:25 crc kubenswrapper[4750]: I0214 14:39:25.515742 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98xfw" podStartSLOduration=2.067558173 podStartE2EDuration="5.515721975s" podCreationTimestamp="2026-02-14 14:39:20 +0000 UTC" firstStartedPulling="2026-02-14 14:39:21.426850309 +0000 UTC m=+2833.452839790" lastFinishedPulling="2026-02-14 14:39:24.875014111 +0000 UTC m=+2836.901003592" observedRunningTime="2026-02-14 14:39:25.513704608 +0000 UTC m=+2837.539694089" watchObservedRunningTime="2026-02-14 14:39:25.515721975 +0000 UTC m=+2837.541711456" Feb 14 14:39:30 crc kubenswrapper[4750]: I0214 14:39:30.448846 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:30 crc kubenswrapper[4750]: I0214 14:39:30.449374 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:30 crc kubenswrapper[4750]: I0214 14:39:30.508823 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:30 crc kubenswrapper[4750]: I0214 14:39:30.596366 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:30 crc kubenswrapper[4750]: I0214 14:39:30.759632 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98xfw"] Feb 14 14:39:32 crc kubenswrapper[4750]: I0214 14:39:32.572904 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98xfw" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="registry-server" containerID="cri-o://80001dcab7351a5f63aee46d28a483b0c48de426ab8d3798a2df212c1dd08717" gracePeriod=2 Feb 14 14:39:33 crc kubenswrapper[4750]: I0214 14:39:33.594463 4750 generic.go:334] "Generic (PLEG): container finished" podID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerID="80001dcab7351a5f63aee46d28a483b0c48de426ab8d3798a2df212c1dd08717" exitCode=0 Feb 14 14:39:33 crc kubenswrapper[4750]: I0214 14:39:33.594555 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerDied","Data":"80001dcab7351a5f63aee46d28a483b0c48de426ab8d3798a2df212c1dd08717"} Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.446986 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.554617 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64cdj\" (UniqueName: \"kubernetes.io/projected/44e2efb5-601c-452d-8d78-2e99f09fccb2-kube-api-access-64cdj\") pod \"44e2efb5-601c-452d-8d78-2e99f09fccb2\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.555303 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-catalog-content\") pod \"44e2efb5-601c-452d-8d78-2e99f09fccb2\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.555333 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-utilities\") pod \"44e2efb5-601c-452d-8d78-2e99f09fccb2\" (UID: \"44e2efb5-601c-452d-8d78-2e99f09fccb2\") " Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.557068 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-utilities" (OuterVolumeSpecName: "utilities") pod "44e2efb5-601c-452d-8d78-2e99f09fccb2" (UID: "44e2efb5-601c-452d-8d78-2e99f09fccb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.567859 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e2efb5-601c-452d-8d78-2e99f09fccb2-kube-api-access-64cdj" (OuterVolumeSpecName: "kube-api-access-64cdj") pod "44e2efb5-601c-452d-8d78-2e99f09fccb2" (UID: "44e2efb5-601c-452d-8d78-2e99f09fccb2"). InnerVolumeSpecName "kube-api-access-64cdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.626801 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e2efb5-601c-452d-8d78-2e99f09fccb2" (UID: "44e2efb5-601c-452d-8d78-2e99f09fccb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.627583 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98xfw" event={"ID":"44e2efb5-601c-452d-8d78-2e99f09fccb2","Type":"ContainerDied","Data":"6993a83a1c39ea767946900dc0b95ee7ac7fe45752a13f297eb2f6300f09fa58"} Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.627635 4750 scope.go:117] "RemoveContainer" containerID="80001dcab7351a5f63aee46d28a483b0c48de426ab8d3798a2df212c1dd08717" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.627648 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98xfw" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.650994 4750 scope.go:117] "RemoveContainer" containerID="d0c48fe6de8ba9b158fb1c06cfb1f9e9e9f360235bbd9cc0b960c414b94d1f63" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.658328 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.658498 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2efb5-601c-452d-8d78-2e99f09fccb2-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.658563 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64cdj\" (UniqueName: \"kubernetes.io/projected/44e2efb5-601c-452d-8d78-2e99f09fccb2-kube-api-access-64cdj\") on node \"crc\" DevicePath \"\"" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.670226 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98xfw"] Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.683977 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98xfw"] Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.690891 4750 scope.go:117] "RemoveContainer" containerID="1e1dcd8194b6050a183b0a599a59e0793e1420e4cbcb1e244303d34841638370" Feb 14 14:39:34 crc kubenswrapper[4750]: I0214 14:39:34.754908 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" path="/var/lib/kubelet/pods/44e2efb5-601c-452d-8d78-2e99f09fccb2/volumes" Feb 14 14:40:07 crc kubenswrapper[4750]: I0214 14:40:07.056890 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="66d16d12-f651-4f21-9160-e22496e7e969" containerID="0cc72aaca380fcbff6db89212e1dba9c45d97019a398accc0e3fa1debd65a9c8" exitCode=0 Feb 14 14:40:07 crc kubenswrapper[4750]: I0214 14:40:07.056959 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" event={"ID":"66d16d12-f651-4f21-9160-e22496e7e969","Type":"ContainerDied","Data":"0cc72aaca380fcbff6db89212e1dba9c45d97019a398accc0e3fa1debd65a9c8"} Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.570901 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.763864 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-inventory\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.764326 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-telemetry-combined-ca-bundle\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.764454 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-0\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.764553 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-2\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.764660 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99lhn\" (UniqueName: \"kubernetes.io/projected/66d16d12-f651-4f21-9160-e22496e7e969-kube-api-access-99lhn\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.764859 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ssh-key-openstack-edpm-ipam\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.765141 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-1\") pod \"66d16d12-f651-4f21-9160-e22496e7e969\" (UID: \"66d16d12-f651-4f21-9160-e22496e7e969\") " Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.770987 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d16d12-f651-4f21-9160-e22496e7e969-kube-api-access-99lhn" (OuterVolumeSpecName: "kube-api-access-99lhn") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "kube-api-access-99lhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.772353 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.827385 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.834060 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.834519 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.841369 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.851221 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-inventory" (OuterVolumeSpecName: "inventory") pod "66d16d12-f651-4f21-9160-e22496e7e969" (UID: "66d16d12-f651-4f21-9160-e22496e7e969"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869012 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869048 4750 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869062 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869071 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869080 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99lhn\" (UniqueName: \"kubernetes.io/projected/66d16d12-f651-4f21-9160-e22496e7e969-kube-api-access-99lhn\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869089 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:08 crc kubenswrapper[4750]: I0214 14:40:08.869123 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/66d16d12-f651-4f21-9160-e22496e7e969-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.082539 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" event={"ID":"66d16d12-f651-4f21-9160-e22496e7e969","Type":"ContainerDied","Data":"9623cfb6e04f79e5d936512be0da6eeddbe2d620742372371c8355e09df6fac7"} Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.082891 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9623cfb6e04f79e5d936512be0da6eeddbe2d620742372371c8355e09df6fac7" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.082621 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-58rdb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.217449 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb"] Feb 14 14:40:09 crc kubenswrapper[4750]: E0214 14:40:09.218223 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="extract-utilities" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.218254 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="extract-utilities" Feb 14 14:40:09 crc kubenswrapper[4750]: E0214 14:40:09.218295 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="extract-content" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.218305 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="extract-content" Feb 14 14:40:09 crc kubenswrapper[4750]: E0214 14:40:09.218323 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d16d12-f651-4f21-9160-e22496e7e969" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.218333 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d16d12-f651-4f21-9160-e22496e7e969" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 14:40:09 crc kubenswrapper[4750]: E0214 14:40:09.218383 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="registry-server" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.218393 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="registry-server" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 
14:40:09.218744 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e2efb5-601c-452d-8d78-2e99f09fccb2" containerName="registry-server" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.218767 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d16d12-f651-4f21-9160-e22496e7e969" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.220296 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.223068 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.223494 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.223948 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.224221 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.224540 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.237416 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb"] Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.382448 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-inventory\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.382872 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.383434 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.383535 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgm7w\" (UniqueName: \"kubernetes.io/projected/b5799dfb-5d7b-40e0-9187-056a19186b75-kube-api-access-sgm7w\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.383866 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-0\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.383926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.384075 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.485398 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgm7w\" (UniqueName: \"kubernetes.io/projected/b5799dfb-5d7b-40e0-9187-056a19186b75-kube-api-access-sgm7w\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.486010 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.486693 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.486831 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.487318 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.487423 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.487560 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.490820 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.491082 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.510162 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-inventory\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.510449 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.510929 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.516591 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.517706 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgm7w\" (UniqueName: \"kubernetes.io/projected/b5799dfb-5d7b-40e0-9187-056a19186b75-kube-api-access-sgm7w\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb\" 
(UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:09 crc kubenswrapper[4750]: I0214 14:40:09.551150 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:40:10 crc kubenswrapper[4750]: I0214 14:40:10.189507 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb"] Feb 14 14:40:11 crc kubenswrapper[4750]: I0214 14:40:11.108956 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" event={"ID":"b5799dfb-5d7b-40e0-9187-056a19186b75","Type":"ContainerStarted","Data":"d025b2265cc3f337e41602591b502934ea68264791f36d8a627559f161a05ba8"} Feb 14 14:40:11 crc kubenswrapper[4750]: I0214 14:40:11.109480 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" event={"ID":"b5799dfb-5d7b-40e0-9187-056a19186b75","Type":"ContainerStarted","Data":"8c9cd7badaf13f7f5721ee58382361703f91faca317121c60d9973cf2c79847a"} Feb 14 14:40:11 crc kubenswrapper[4750]: I0214 14:40:11.131275 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" podStartSLOduration=1.656955889 podStartE2EDuration="2.131253646s" podCreationTimestamp="2026-02-14 14:40:09 +0000 UTC" firstStartedPulling="2026-02-14 14:40:10.195770246 +0000 UTC m=+2882.221759737" lastFinishedPulling="2026-02-14 14:40:10.670068013 +0000 UTC m=+2882.696057494" observedRunningTime="2026-02-14 14:40:11.130596137 +0000 UTC m=+2883.156585638" watchObservedRunningTime="2026-02-14 14:40:11.131253646 +0000 UTC m=+2883.157243127" Feb 14 14:40:30 crc kubenswrapper[4750]: I0214 14:40:30.129071 4750 patch_prober.go:28] 
interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:40:30 crc kubenswrapper[4750]: I0214 14:40:30.130007 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:41:00 crc kubenswrapper[4750]: I0214 14:41:00.128794 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:41:00 crc kubenswrapper[4750]: I0214 14:41:00.129428 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:41:30 crc kubenswrapper[4750]: I0214 14:41:30.128943 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:41:30 crc kubenswrapper[4750]: I0214 14:41:30.129547 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:41:30 crc kubenswrapper[4750]: I0214 14:41:30.129596 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:41:30 crc kubenswrapper[4750]: I0214 14:41:30.130459 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:41:30 crc kubenswrapper[4750]: I0214 14:41:30.130554 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" gracePeriod=600 Feb 14 14:41:30 crc kubenswrapper[4750]: E0214 14:41:30.270684 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:41:31 crc kubenswrapper[4750]: I0214 14:41:31.143179 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" exitCode=0 Feb 14 14:41:31 crc kubenswrapper[4750]: I0214 
14:41:31.145826 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e"} Feb 14 14:41:31 crc kubenswrapper[4750]: I0214 14:41:31.145976 4750 scope.go:117] "RemoveContainer" containerID="69c59495d37779ccc5f344088d3c0e426e25852bfc1efeee9621dc4500593e17" Feb 14 14:41:31 crc kubenswrapper[4750]: I0214 14:41:31.146580 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:41:31 crc kubenswrapper[4750]: E0214 14:41:31.147256 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:41:41 crc kubenswrapper[4750]: I0214 14:41:41.742960 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:41:41 crc kubenswrapper[4750]: E0214 14:41:41.744449 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:41:53 crc kubenswrapper[4750]: I0214 14:41:53.742332 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 
14:41:53 crc kubenswrapper[4750]: E0214 14:41:53.743157 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:42:05 crc kubenswrapper[4750]: I0214 14:42:05.742205 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:42:05 crc kubenswrapper[4750]: E0214 14:42:05.743766 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:42:12 crc kubenswrapper[4750]: I0214 14:42:12.651530 4750 generic.go:334] "Generic (PLEG): container finished" podID="b5799dfb-5d7b-40e0-9187-056a19186b75" containerID="d025b2265cc3f337e41602591b502934ea68264791f36d8a627559f161a05ba8" exitCode=0 Feb 14 14:42:12 crc kubenswrapper[4750]: I0214 14:42:12.651752 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" event={"ID":"b5799dfb-5d7b-40e0-9187-056a19186b75","Type":"ContainerDied","Data":"d025b2265cc3f337e41602591b502934ea68264791f36d8a627559f161a05ba8"} Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.133657 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.164449 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-1\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.164567 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-telemetry-power-monitoring-combined-ca-bundle\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.164617 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-inventory\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.164708 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-2\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.164812 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-0\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") 
" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.165017 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ssh-key-openstack-edpm-ipam\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.165093 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgm7w\" (UniqueName: \"kubernetes.io/projected/b5799dfb-5d7b-40e0-9187-056a19186b75-kube-api-access-sgm7w\") pod \"b5799dfb-5d7b-40e0-9187-056a19186b75\" (UID: \"b5799dfb-5d7b-40e0-9187-056a19186b75\") " Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.171229 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5799dfb-5d7b-40e0-9187-056a19186b75-kube-api-access-sgm7w" (OuterVolumeSpecName: "kube-api-access-sgm7w") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "kube-api-access-sgm7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.171717 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.205373 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.207150 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-inventory" (OuterVolumeSpecName: "inventory") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.208423 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.208801 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.213506 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "b5799dfb-5d7b-40e0-9187-056a19186b75" (UID: "b5799dfb-5d7b-40e0-9187-056a19186b75"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.268146 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.268186 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgm7w\" (UniqueName: \"kubernetes.io/projected/b5799dfb-5d7b-40e0-9187-056a19186b75-kube-api-access-sgm7w\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.268213 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.268225 4750 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.268254 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 
crc kubenswrapper[4750]: I0214 14:42:14.268263 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.268271 4750 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/b5799dfb-5d7b-40e0-9187-056a19186b75-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.675249 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" event={"ID":"b5799dfb-5d7b-40e0-9187-056a19186b75","Type":"ContainerDied","Data":"8c9cd7badaf13f7f5721ee58382361703f91faca317121c60d9973cf2c79847a"} Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.675292 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9cd7badaf13f7f5721ee58382361703f91faca317121c60d9973cf2c79847a" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.675298 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.798567 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns"] Feb 14 14:42:14 crc kubenswrapper[4750]: E0214 14:42:14.799164 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5799dfb-5d7b-40e0-9187-056a19186b75" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.799186 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5799dfb-5d7b-40e0-9187-056a19186b75" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.799462 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5799dfb-5d7b-40e0-9187-056a19186b75" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.800284 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.803460 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.803833 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.804054 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tbck5" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.804358 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.804500 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.816583 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns"] Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.885662 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.885709 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-1\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.885975 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.886046 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.886096 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xtsp\" (UniqueName: \"kubernetes.io/projected/6f331906-e9ac-4779-b6a3-28ac233ab472-kube-api-access-6xtsp\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.987878 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.987944 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.988152 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.988192 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.988248 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xtsp\" (UniqueName: \"kubernetes.io/projected/6f331906-e9ac-4779-b6a3-28ac233ab472-kube-api-access-6xtsp\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.997571 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:14 crc kubenswrapper[4750]: I0214 14:42:14.997803 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:15 crc kubenswrapper[4750]: I0214 14:42:15.002276 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:15 crc kubenswrapper[4750]: I0214 14:42:15.009097 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:15 crc kubenswrapper[4750]: I0214 14:42:15.033707 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xtsp\" (UniqueName: \"kubernetes.io/projected/6f331906-e9ac-4779-b6a3-28ac233ab472-kube-api-access-6xtsp\") pod \"logging-edpm-deployment-openstack-edpm-ipam-cdxns\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:15 crc kubenswrapper[4750]: I0214 14:42:15.131978 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:15 crc kubenswrapper[4750]: I0214 14:42:15.756679 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns"] Feb 14 14:42:16 crc kubenswrapper[4750]: I0214 14:42:16.704428 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" event={"ID":"6f331906-e9ac-4779-b6a3-28ac233ab472","Type":"ContainerStarted","Data":"92cc61a78b1c0ee27ed38f4a8e257553da4099b8cd06007c250a707918cacb22"} Feb 14 14:42:16 crc kubenswrapper[4750]: I0214 14:42:16.704943 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" event={"ID":"6f331906-e9ac-4779-b6a3-28ac233ab472","Type":"ContainerStarted","Data":"d20f5f15d9a068aa781612a58f8c3f8bd899a6070856dcf0eacc154c3888c732"} Feb 14 14:42:16 crc kubenswrapper[4750]: I0214 14:42:16.728252 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" podStartSLOduration=2.264880795 podStartE2EDuration="2.728218829s" podCreationTimestamp="2026-02-14 14:42:14 +0000 UTC" firstStartedPulling="2026-02-14 14:42:15.763894775 +0000 UTC m=+3007.789884286" lastFinishedPulling="2026-02-14 14:42:16.227232829 +0000 UTC m=+3008.253222320" observedRunningTime="2026-02-14 14:42:16.719204261 +0000 UTC m=+3008.745193782" watchObservedRunningTime="2026-02-14 14:42:16.728218829 +0000 UTC m=+3008.754208350" Feb 14 14:42:17 crc kubenswrapper[4750]: I0214 14:42:17.742492 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:42:17 crc kubenswrapper[4750]: 
E0214 14:42:17.743292 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:42:29 crc kubenswrapper[4750]: I0214 14:42:29.742587 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:42:29 crc kubenswrapper[4750]: E0214 14:42:29.743543 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:42:31 crc kubenswrapper[4750]: I0214 14:42:31.886064 4750 generic.go:334] "Generic (PLEG): container finished" podID="6f331906-e9ac-4779-b6a3-28ac233ab472" containerID="92cc61a78b1c0ee27ed38f4a8e257553da4099b8cd06007c250a707918cacb22" exitCode=0 Feb 14 14:42:31 crc kubenswrapper[4750]: I0214 14:42:31.886155 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" event={"ID":"6f331906-e9ac-4779-b6a3-28ac233ab472","Type":"ContainerDied","Data":"92cc61a78b1c0ee27ed38f4a8e257553da4099b8cd06007c250a707918cacb22"} Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.371708 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.481155 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-0\") pod \"6f331906-e9ac-4779-b6a3-28ac233ab472\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.481664 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xtsp\" (UniqueName: \"kubernetes.io/projected/6f331906-e9ac-4779-b6a3-28ac233ab472-kube-api-access-6xtsp\") pod \"6f331906-e9ac-4779-b6a3-28ac233ab472\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.481690 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-ssh-key-openstack-edpm-ipam\") pod \"6f331906-e9ac-4779-b6a3-28ac233ab472\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.481715 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-inventory\") pod \"6f331906-e9ac-4779-b6a3-28ac233ab472\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.481937 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-1\") pod \"6f331906-e9ac-4779-b6a3-28ac233ab472\" (UID: \"6f331906-e9ac-4779-b6a3-28ac233ab472\") " Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 
14:42:33.519930 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f331906-e9ac-4779-b6a3-28ac233ab472-kube-api-access-6xtsp" (OuterVolumeSpecName: "kube-api-access-6xtsp") pod "6f331906-e9ac-4779-b6a3-28ac233ab472" (UID: "6f331906-e9ac-4779-b6a3-28ac233ab472"). InnerVolumeSpecName "kube-api-access-6xtsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.527791 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "6f331906-e9ac-4779-b6a3-28ac233ab472" (UID: "6f331906-e9ac-4779-b6a3-28ac233ab472"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.532233 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "6f331906-e9ac-4779-b6a3-28ac233ab472" (UID: "6f331906-e9ac-4779-b6a3-28ac233ab472"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.539340 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-inventory" (OuterVolumeSpecName: "inventory") pod "6f331906-e9ac-4779-b6a3-28ac233ab472" (UID: "6f331906-e9ac-4779-b6a3-28ac233ab472"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.541269 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f331906-e9ac-4779-b6a3-28ac233ab472" (UID: "6f331906-e9ac-4779-b6a3-28ac233ab472"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.584379 4750 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.584412 4750 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.584424 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xtsp\" (UniqueName: \"kubernetes.io/projected/6f331906-e9ac-4779-b6a3-28ac233ab472-kube-api-access-6xtsp\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.584434 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.584443 4750 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f331906-e9ac-4779-b6a3-28ac233ab472-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 
14:42:33.909930 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" event={"ID":"6f331906-e9ac-4779-b6a3-28ac233ab472","Type":"ContainerDied","Data":"d20f5f15d9a068aa781612a58f8c3f8bd899a6070856dcf0eacc154c3888c732"} Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.909978 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d20f5f15d9a068aa781612a58f8c3f8bd899a6070856dcf0eacc154c3888c732" Feb 14 14:42:33 crc kubenswrapper[4750]: I0214 14:42:33.910037 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-cdxns" Feb 14 14:42:42 crc kubenswrapper[4750]: I0214 14:42:42.744098 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:42:42 crc kubenswrapper[4750]: E0214 14:42:42.744896 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:42:55 crc kubenswrapper[4750]: I0214 14:42:55.742905 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:42:55 crc kubenswrapper[4750]: E0214 14:42:55.744586 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:43:08 crc kubenswrapper[4750]: I0214 14:43:08.755079 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:43:08 crc kubenswrapper[4750]: E0214 14:43:08.756542 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:43:22 crc kubenswrapper[4750]: I0214 14:43:22.743420 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:43:22 crc kubenswrapper[4750]: E0214 14:43:22.744485 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:43:35 crc kubenswrapper[4750]: I0214 14:43:35.743472 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:43:35 crc kubenswrapper[4750]: E0214 14:43:35.744291 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:43:48 crc kubenswrapper[4750]: I0214 14:43:48.752634 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:43:48 crc kubenswrapper[4750]: E0214 14:43:48.753384 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:43:59 crc kubenswrapper[4750]: I0214 14:43:59.742460 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:43:59 crc kubenswrapper[4750]: E0214 14:43:59.743579 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:44:13 crc kubenswrapper[4750]: I0214 14:44:13.741725 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:44:13 crc kubenswrapper[4750]: E0214 14:44:13.742582 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:44:24 crc kubenswrapper[4750]: I0214 14:44:24.265894 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:44:24 crc kubenswrapper[4750]: E0214 14:44:24.270287 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:44:37 crc kubenswrapper[4750]: I0214 14:44:37.743106 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:44:37 crc kubenswrapper[4750]: E0214 14:44:37.744163 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:44:52 crc kubenswrapper[4750]: I0214 14:44:52.742507 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:44:52 crc kubenswrapper[4750]: E0214 14:44:52.743520 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.180912 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8"] Feb 14 14:45:00 crc kubenswrapper[4750]: E0214 14:45:00.182401 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f331906-e9ac-4779-b6a3-28ac233ab472" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.182434 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f331906-e9ac-4779-b6a3-28ac233ab472" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.182938 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f331906-e9ac-4779-b6a3-28ac233ab472" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.184437 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.189286 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.190812 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.203627 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8"] Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.319620 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p6v6\" (UniqueName: \"kubernetes.io/projected/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-kube-api-access-7p6v6\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.319878 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-config-volume\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.320047 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-secret-volume\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.422274 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p6v6\" (UniqueName: \"kubernetes.io/projected/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-kube-api-access-7p6v6\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.422330 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-config-volume\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.422386 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-secret-volume\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.423650 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-config-volume\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.427787 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-secret-volume\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.439573 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p6v6\" (UniqueName: \"kubernetes.io/projected/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-kube-api-access-7p6v6\") pod \"collect-profiles-29518005-dcqc8\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:00 crc kubenswrapper[4750]: I0214 14:45:00.520574 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:01 crc kubenswrapper[4750]: I0214 14:45:01.022627 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8"] Feb 14 14:45:01 crc kubenswrapper[4750]: I0214 14:45:01.781438 4750 generic.go:334] "Generic (PLEG): container finished" podID="6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" containerID="9b3cc0337be7519e490883c0572c7d890d875875762cffebb60404dbe77847d5" exitCode=0 Feb 14 14:45:01 crc kubenswrapper[4750]: I0214 14:45:01.781510 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" event={"ID":"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41","Type":"ContainerDied","Data":"9b3cc0337be7519e490883c0572c7d890d875875762cffebb60404dbe77847d5"} Feb 14 14:45:01 crc kubenswrapper[4750]: I0214 14:45:01.781791 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" 
event={"ID":"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41","Type":"ContainerStarted","Data":"781acd24bf505281455f46f9e1cc0a1ad82881a4b07db399bcec0231c3ad8239"} Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.233893 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.409987 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p6v6\" (UniqueName: \"kubernetes.io/projected/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-kube-api-access-7p6v6\") pod \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.410269 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-secret-volume\") pod \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.410389 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-config-volume\") pod \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\" (UID: \"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41\") " Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.411029 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" (UID: "6fc27cc7-9845-4f76-8ce6-a01f84cf2a41"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.411280 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.416087 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-kube-api-access-7p6v6" (OuterVolumeSpecName: "kube-api-access-7p6v6") pod "6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" (UID: "6fc27cc7-9845-4f76-8ce6-a01f84cf2a41"). InnerVolumeSpecName "kube-api-access-7p6v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.416262 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" (UID: "6fc27cc7-9845-4f76-8ce6-a01f84cf2a41"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.512988 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p6v6\" (UniqueName: \"kubernetes.io/projected/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-kube-api-access-7p6v6\") on node \"crc\" DevicePath \"\"" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.513198 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.816390 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" event={"ID":"6fc27cc7-9845-4f76-8ce6-a01f84cf2a41","Type":"ContainerDied","Data":"781acd24bf505281455f46f9e1cc0a1ad82881a4b07db399bcec0231c3ad8239"} Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.816721 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781acd24bf505281455f46f9e1cc0a1ad82881a4b07db399bcec0231c3ad8239" Feb 14 14:45:03 crc kubenswrapper[4750]: I0214 14:45:03.816501 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8" Feb 14 14:45:04 crc kubenswrapper[4750]: I0214 14:45:04.345620 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq"] Feb 14 14:45:04 crc kubenswrapper[4750]: I0214 14:45:04.361106 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517960-4gmzq"] Feb 14 14:45:04 crc kubenswrapper[4750]: I0214 14:45:04.760629 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160d1cb0-fb48-404e-aec8-e1e38de67897" path="/var/lib/kubelet/pods/160d1cb0-fb48-404e-aec8-e1e38de67897/volumes" Feb 14 14:45:05 crc kubenswrapper[4750]: I0214 14:45:05.742298 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:45:05 crc kubenswrapper[4750]: E0214 14:45:05.742810 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:45:17 crc kubenswrapper[4750]: I0214 14:45:17.742293 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:45:17 crc kubenswrapper[4750]: E0214 14:45:17.744447 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:45:27 crc kubenswrapper[4750]: I0214 14:45:27.344094 4750 scope.go:117] "RemoveContainer" containerID="ff6bce62424cbdb81d86c990608e2c8fe724b78495b862c8fc586e0ec59ed440" Feb 14 14:45:30 crc kubenswrapper[4750]: I0214 14:45:30.742358 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:45:30 crc kubenswrapper[4750]: E0214 14:45:30.743286 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:45:31 crc kubenswrapper[4750]: I0214 14:45:31.977794 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7f4d2"] Feb 14 14:45:31 crc kubenswrapper[4750]: E0214 14:45:31.979377 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" containerName="collect-profiles" Feb 14 14:45:31 crc kubenswrapper[4750]: I0214 14:45:31.979402 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" containerName="collect-profiles" Feb 14 14:45:31 crc kubenswrapper[4750]: I0214 14:45:31.980055 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" containerName="collect-profiles" Feb 14 14:45:31 crc kubenswrapper[4750]: I0214 14:45:31.989570 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.034617 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7f4d2"] Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.171354 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs9xs\" (UniqueName: \"kubernetes.io/projected/6dde3c6b-a670-47da-88e0-26eae37ebf9d-kube-api-access-gs9xs\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.171546 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-utilities\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.171608 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-catalog-content\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.274407 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-catalog-content\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.274665 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gs9xs\" (UniqueName: \"kubernetes.io/projected/6dde3c6b-a670-47da-88e0-26eae37ebf9d-kube-api-access-gs9xs\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.274886 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-utilities\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.274930 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-catalog-content\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.275362 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-utilities\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.295958 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs9xs\" (UniqueName: \"kubernetes.io/projected/6dde3c6b-a670-47da-88e0-26eae37ebf9d-kube-api-access-gs9xs\") pod \"certified-operators-7f4d2\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.331250 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:32 crc kubenswrapper[4750]: I0214 14:45:32.867832 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7f4d2"] Feb 14 14:45:33 crc kubenswrapper[4750]: I0214 14:45:33.174674 4750 generic.go:334] "Generic (PLEG): container finished" podID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerID="4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1" exitCode=0 Feb 14 14:45:33 crc kubenswrapper[4750]: I0214 14:45:33.174730 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerDied","Data":"4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1"} Feb 14 14:45:33 crc kubenswrapper[4750]: I0214 14:45:33.175036 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerStarted","Data":"46b1c2b154944ba6b618e8f323555161cfd39ed7e86cc0bce3696838f238d38b"} Feb 14 14:45:33 crc kubenswrapper[4750]: I0214 14:45:33.178916 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:45:35 crc kubenswrapper[4750]: I0214 14:45:35.204550 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerStarted","Data":"3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e"} Feb 14 14:45:37 crc kubenswrapper[4750]: I0214 14:45:37.229512 4750 generic.go:334] "Generic (PLEG): container finished" podID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerID="3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e" exitCode=0 Feb 14 14:45:37 crc kubenswrapper[4750]: I0214 14:45:37.229569 4750 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerDied","Data":"3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e"} Feb 14 14:45:38 crc kubenswrapper[4750]: I0214 14:45:38.254241 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerStarted","Data":"cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6"} Feb 14 14:45:38 crc kubenswrapper[4750]: I0214 14:45:38.278040 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7f4d2" podStartSLOduration=2.81939723 podStartE2EDuration="7.278016385s" podCreationTimestamp="2026-02-14 14:45:31 +0000 UTC" firstStartedPulling="2026-02-14 14:45:33.178687406 +0000 UTC m=+3205.204676887" lastFinishedPulling="2026-02-14 14:45:37.637306551 +0000 UTC m=+3209.663296042" observedRunningTime="2026-02-14 14:45:38.276144852 +0000 UTC m=+3210.302134363" watchObservedRunningTime="2026-02-14 14:45:38.278016385 +0000 UTC m=+3210.304005876" Feb 14 14:45:42 crc kubenswrapper[4750]: I0214 14:45:42.332202 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:42 crc kubenswrapper[4750]: I0214 14:45:42.333368 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:42 crc kubenswrapper[4750]: I0214 14:45:42.412518 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:42 crc kubenswrapper[4750]: I0214 14:45:42.743098 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:45:42 crc kubenswrapper[4750]: E0214 14:45:42.743636 4750 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:45:43 crc kubenswrapper[4750]: I0214 14:45:43.359314 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:43 crc kubenswrapper[4750]: I0214 14:45:43.423591 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7f4d2"] Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.338105 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7f4d2" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="registry-server" containerID="cri-o://cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6" gracePeriod=2 Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.915000 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.973512 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs9xs\" (UniqueName: \"kubernetes.io/projected/6dde3c6b-a670-47da-88e0-26eae37ebf9d-kube-api-access-gs9xs\") pod \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.973712 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-catalog-content\") pod \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.973975 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-utilities\") pod \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\" (UID: \"6dde3c6b-a670-47da-88e0-26eae37ebf9d\") " Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.975953 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-utilities" (OuterVolumeSpecName: "utilities") pod "6dde3c6b-a670-47da-88e0-26eae37ebf9d" (UID: "6dde3c6b-a670-47da-88e0-26eae37ebf9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:45:45 crc kubenswrapper[4750]: I0214 14:45:45.985477 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dde3c6b-a670-47da-88e0-26eae37ebf9d-kube-api-access-gs9xs" (OuterVolumeSpecName: "kube-api-access-gs9xs") pod "6dde3c6b-a670-47da-88e0-26eae37ebf9d" (UID: "6dde3c6b-a670-47da-88e0-26eae37ebf9d"). InnerVolumeSpecName "kube-api-access-gs9xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.046679 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dde3c6b-a670-47da-88e0-26eae37ebf9d" (UID: "6dde3c6b-a670-47da-88e0-26eae37ebf9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.078097 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.078186 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs9xs\" (UniqueName: \"kubernetes.io/projected/6dde3c6b-a670-47da-88e0-26eae37ebf9d-kube-api-access-gs9xs\") on node \"crc\" DevicePath \"\"" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.078209 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde3c6b-a670-47da-88e0-26eae37ebf9d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.349789 4750 generic.go:334] "Generic (PLEG): container finished" podID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerID="cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6" exitCode=0 Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.349826 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerDied","Data":"cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6"} Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.349848 4750 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7f4d2" event={"ID":"6dde3c6b-a670-47da-88e0-26eae37ebf9d","Type":"ContainerDied","Data":"46b1c2b154944ba6b618e8f323555161cfd39ed7e86cc0bce3696838f238d38b"} Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.349868 4750 scope.go:117] "RemoveContainer" containerID="cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.349992 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7f4d2" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.370988 4750 scope.go:117] "RemoveContainer" containerID="3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.398378 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7f4d2"] Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.403184 4750 scope.go:117] "RemoveContainer" containerID="4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.411282 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7f4d2"] Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.457263 4750 scope.go:117] "RemoveContainer" containerID="cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6" Feb 14 14:45:46 crc kubenswrapper[4750]: E0214 14:45:46.457564 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6\": container with ID starting with cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6 not found: ID does not exist" containerID="cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 
14:45:46.457592 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6"} err="failed to get container status \"cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6\": rpc error: code = NotFound desc = could not find container \"cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6\": container with ID starting with cd11ea6a3939d9c995f03bb6e572ed6a2c6b46041e85461638912c954260a4e6 not found: ID does not exist" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.457613 4750 scope.go:117] "RemoveContainer" containerID="3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e" Feb 14 14:45:46 crc kubenswrapper[4750]: E0214 14:45:46.457947 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e\": container with ID starting with 3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e not found: ID does not exist" containerID="3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.458002 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e"} err="failed to get container status \"3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e\": rpc error: code = NotFound desc = could not find container \"3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e\": container with ID starting with 3ad667adcdc7fbe18927faed46ba7f48826a80301a922c65bf1b7617dadd098e not found: ID does not exist" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.458017 4750 scope.go:117] "RemoveContainer" containerID="4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1" Feb 14 14:45:46 crc 
kubenswrapper[4750]: E0214 14:45:46.458420 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1\": container with ID starting with 4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1 not found: ID does not exist" containerID="4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.458445 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1"} err="failed to get container status \"4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1\": rpc error: code = NotFound desc = could not find container \"4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1\": container with ID starting with 4635bf302dc911e95ad6e9b1bc79b598f70ea088b1bace3239389e6db66bcaf1 not found: ID does not exist" Feb 14 14:45:46 crc kubenswrapper[4750]: I0214 14:45:46.767584 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" path="/var/lib/kubelet/pods/6dde3c6b-a670-47da-88e0-26eae37ebf9d/volumes" Feb 14 14:45:54 crc kubenswrapper[4750]: I0214 14:45:54.742784 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:45:54 crc kubenswrapper[4750]: E0214 14:45:54.743879 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:46:05 crc 
kubenswrapper[4750]: I0214 14:46:05.742871 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:46:05 crc kubenswrapper[4750]: E0214 14:46:05.746509 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:46:16 crc kubenswrapper[4750]: I0214 14:46:16.742585 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:46:16 crc kubenswrapper[4750]: E0214 14:46:16.743846 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:46:29 crc kubenswrapper[4750]: I0214 14:46:29.743208 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:46:29 crc kubenswrapper[4750]: E0214 14:46:29.744943 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 
14 14:46:41 crc kubenswrapper[4750]: I0214 14:46:41.742616 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:46:42 crc kubenswrapper[4750]: I0214 14:46:42.249761 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"ee8d47b0e27f157af8af026eecf8cf6589b4a5bce94cc41f3035e83577116662"} Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.076575 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jkmdm"] Feb 14 14:47:40 crc kubenswrapper[4750]: E0214 14:47:40.077600 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="registry-server" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.077619 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="registry-server" Feb 14 14:47:40 crc kubenswrapper[4750]: E0214 14:47:40.077662 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="extract-utilities" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.077670 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="extract-utilities" Feb 14 14:47:40 crc kubenswrapper[4750]: E0214 14:47:40.077705 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="extract-content" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.077713 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="extract-content" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.077974 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6dde3c6b-a670-47da-88e0-26eae37ebf9d" containerName="registry-server" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.080013 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.100676 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkmdm"] Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.135647 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-catalog-content\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.135743 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpvz\" (UniqueName: \"kubernetes.io/projected/9e8d90f5-0b1e-4baa-a18f-63a932b07254-kube-api-access-hxpvz\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.136568 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-utilities\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.239969 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-utilities\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " 
pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.240213 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-catalog-content\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.240261 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpvz\" (UniqueName: \"kubernetes.io/projected/9e8d90f5-0b1e-4baa-a18f-63a932b07254-kube-api-access-hxpvz\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.240718 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-catalog-content\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.240827 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-utilities\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.260337 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpvz\" (UniqueName: \"kubernetes.io/projected/9e8d90f5-0b1e-4baa-a18f-63a932b07254-kube-api-access-hxpvz\") pod \"redhat-operators-jkmdm\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " pod="openshift-marketplace/redhat-operators-jkmdm" Feb 
14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.426714 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:40 crc kubenswrapper[4750]: I0214 14:47:40.968591 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkmdm"] Feb 14 14:47:41 crc kubenswrapper[4750]: I0214 14:47:41.048746 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerStarted","Data":"f996e4bd1e85804b82587a4ff829064938f0128707eccfc51b580e6770f5f330"} Feb 14 14:47:42 crc kubenswrapper[4750]: I0214 14:47:42.063139 4750 generic.go:334] "Generic (PLEG): container finished" podID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerID="a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be" exitCode=0 Feb 14 14:47:42 crc kubenswrapper[4750]: I0214 14:47:42.063245 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerDied","Data":"a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be"} Feb 14 14:47:43 crc kubenswrapper[4750]: I0214 14:47:43.075224 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerStarted","Data":"a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3"} Feb 14 14:47:47 crc kubenswrapper[4750]: I0214 14:47:47.118515 4750 generic.go:334] "Generic (PLEG): container finished" podID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerID="a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3" exitCode=0 Feb 14 14:47:47 crc kubenswrapper[4750]: I0214 14:47:47.118605 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" 
event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerDied","Data":"a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3"} Feb 14 14:47:48 crc kubenswrapper[4750]: I0214 14:47:48.142999 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerStarted","Data":"e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640"} Feb 14 14:47:48 crc kubenswrapper[4750]: I0214 14:47:48.185058 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jkmdm" podStartSLOduration=2.6803756180000002 podStartE2EDuration="8.185038008s" podCreationTimestamp="2026-02-14 14:47:40 +0000 UTC" firstStartedPulling="2026-02-14 14:47:42.066633989 +0000 UTC m=+3334.092623480" lastFinishedPulling="2026-02-14 14:47:47.571296389 +0000 UTC m=+3339.597285870" observedRunningTime="2026-02-14 14:47:48.171054269 +0000 UTC m=+3340.197043750" watchObservedRunningTime="2026-02-14 14:47:48.185038008 +0000 UTC m=+3340.211027509" Feb 14 14:47:50 crc kubenswrapper[4750]: I0214 14:47:50.427455 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:50 crc kubenswrapper[4750]: I0214 14:47:50.428105 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:47:51 crc kubenswrapper[4750]: I0214 14:47:51.480032 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jkmdm" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" probeResult="failure" output=< Feb 14 14:47:51 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:47:51 crc kubenswrapper[4750]: > Feb 14 14:48:01 crc kubenswrapper[4750]: I0214 14:48:01.484104 4750 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-jkmdm" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" probeResult="failure" output=< Feb 14 14:48:01 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:48:01 crc kubenswrapper[4750]: > Feb 14 14:48:11 crc kubenswrapper[4750]: I0214 14:48:11.495273 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jkmdm" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" probeResult="failure" output=< Feb 14 14:48:11 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:48:11 crc kubenswrapper[4750]: > Feb 14 14:48:20 crc kubenswrapper[4750]: I0214 14:48:20.477080 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:48:20 crc kubenswrapper[4750]: I0214 14:48:20.532207 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:48:20 crc kubenswrapper[4750]: I0214 14:48:20.717594 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jkmdm"] Feb 14 14:48:21 crc kubenswrapper[4750]: I0214 14:48:21.508178 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jkmdm" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" containerID="cri-o://e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640" gracePeriod=2 Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.103679 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.129007 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-catalog-content\") pod \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.129069 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxpvz\" (UniqueName: \"kubernetes.io/projected/9e8d90f5-0b1e-4baa-a18f-63a932b07254-kube-api-access-hxpvz\") pod \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.129463 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-utilities\") pod \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\" (UID: \"9e8d90f5-0b1e-4baa-a18f-63a932b07254\") " Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.129996 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-utilities" (OuterVolumeSpecName: "utilities") pod "9e8d90f5-0b1e-4baa-a18f-63a932b07254" (UID: "9e8d90f5-0b1e-4baa-a18f-63a932b07254"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.130510 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.136407 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8d90f5-0b1e-4baa-a18f-63a932b07254-kube-api-access-hxpvz" (OuterVolumeSpecName: "kube-api-access-hxpvz") pod "9e8d90f5-0b1e-4baa-a18f-63a932b07254" (UID: "9e8d90f5-0b1e-4baa-a18f-63a932b07254"). InnerVolumeSpecName "kube-api-access-hxpvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.233540 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxpvz\" (UniqueName: \"kubernetes.io/projected/9e8d90f5-0b1e-4baa-a18f-63a932b07254-kube-api-access-hxpvz\") on node \"crc\" DevicePath \"\"" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.248323 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e8d90f5-0b1e-4baa-a18f-63a932b07254" (UID: "9e8d90f5-0b1e-4baa-a18f-63a932b07254"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.335838 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e8d90f5-0b1e-4baa-a18f-63a932b07254-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.526153 4750 generic.go:334] "Generic (PLEG): container finished" podID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerID="e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640" exitCode=0 Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.526206 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerDied","Data":"e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640"} Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.526240 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkmdm" event={"ID":"9e8d90f5-0b1e-4baa-a18f-63a932b07254","Type":"ContainerDied","Data":"f996e4bd1e85804b82587a4ff829064938f0128707eccfc51b580e6770f5f330"} Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.526263 4750 scope.go:117] "RemoveContainer" containerID="e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.526265 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkmdm" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.585140 4750 scope.go:117] "RemoveContainer" containerID="a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.606120 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jkmdm"] Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.622916 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jkmdm"] Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.630526 4750 scope.go:117] "RemoveContainer" containerID="a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.680272 4750 scope.go:117] "RemoveContainer" containerID="e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640" Feb 14 14:48:22 crc kubenswrapper[4750]: E0214 14:48:22.680728 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640\": container with ID starting with e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640 not found: ID does not exist" containerID="e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.680772 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640"} err="failed to get container status \"e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640\": rpc error: code = NotFound desc = could not find container \"e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640\": container with ID starting with e56ff1053022f46ef44c54b4a84cdbbb4626d4b4b9deacb4c882507436283640 not found: ID does 
not exist" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.680797 4750 scope.go:117] "RemoveContainer" containerID="a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3" Feb 14 14:48:22 crc kubenswrapper[4750]: E0214 14:48:22.681360 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3\": container with ID starting with a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3 not found: ID does not exist" containerID="a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.681404 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3"} err="failed to get container status \"a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3\": rpc error: code = NotFound desc = could not find container \"a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3\": container with ID starting with a20964b92e9385e71c74bc8901f3b0b86497a382829a42789bf22f7a916930f3 not found: ID does not exist" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.681425 4750 scope.go:117] "RemoveContainer" containerID="a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be" Feb 14 14:48:22 crc kubenswrapper[4750]: E0214 14:48:22.683775 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be\": container with ID starting with a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be not found: ID does not exist" containerID="a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.683807 4750 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be"} err="failed to get container status \"a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be\": rpc error: code = NotFound desc = could not find container \"a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be\": container with ID starting with a4c405a330309c961d4b8752741308848d646bab748a4b142a05005689fdd4be not found: ID does not exist" Feb 14 14:48:22 crc kubenswrapper[4750]: I0214 14:48:22.754760 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" path="/var/lib/kubelet/pods/9e8d90f5-0b1e-4baa-a18f-63a932b07254/volumes" Feb 14 14:49:00 crc kubenswrapper[4750]: I0214 14:49:00.129515 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:49:00 crc kubenswrapper[4750]: I0214 14:49:00.130057 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:49:30 crc kubenswrapper[4750]: I0214 14:49:30.129278 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:49:30 crc kubenswrapper[4750]: I0214 14:49:30.129988 4750 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.475473 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5mpcb"] Feb 14 14:49:33 crc kubenswrapper[4750]: E0214 14:49:33.477643 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="extract-utilities" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.477682 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="extract-utilities" Feb 14 14:49:33 crc kubenswrapper[4750]: E0214 14:49:33.477780 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.477799 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" Feb 14 14:49:33 crc kubenswrapper[4750]: E0214 14:49:33.477835 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="extract-content" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.477854 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="extract-content" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.478469 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8d90f5-0b1e-4baa-a18f-63a932b07254" containerName="registry-server" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.483050 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.492393 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mpcb"] Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.506813 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-catalog-content\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.506911 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-utilities\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.506970 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hks7b\" (UniqueName: \"kubernetes.io/projected/0569aaab-089b-43ac-9543-6cf0c064f752-kube-api-access-hks7b\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.608991 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-catalog-content\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.609062 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-utilities\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.609121 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hks7b\" (UniqueName: \"kubernetes.io/projected/0569aaab-089b-43ac-9543-6cf0c064f752-kube-api-access-hks7b\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.609742 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-utilities\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.609741 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-catalog-content\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.631093 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hks7b\" (UniqueName: \"kubernetes.io/projected/0569aaab-089b-43ac-9543-6cf0c064f752-kube-api-access-hks7b\") pod \"community-operators-5mpcb\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:33 crc kubenswrapper[4750]: I0214 14:49:33.812612 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:34 crc kubenswrapper[4750]: I0214 14:49:34.345080 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mpcb"] Feb 14 14:49:34 crc kubenswrapper[4750]: I0214 14:49:34.503944 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerStarted","Data":"1988a8ad4b736eca0ded76d6eceeeb67b81c9149048ec9329f77e67b4c631863"} Feb 14 14:49:35 crc kubenswrapper[4750]: I0214 14:49:35.527896 4750 generic.go:334] "Generic (PLEG): container finished" podID="0569aaab-089b-43ac-9543-6cf0c064f752" containerID="2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd" exitCode=0 Feb 14 14:49:35 crc kubenswrapper[4750]: I0214 14:49:35.528074 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerDied","Data":"2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd"} Feb 14 14:49:36 crc kubenswrapper[4750]: I0214 14:49:36.543914 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerStarted","Data":"efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff"} Feb 14 14:49:37 crc kubenswrapper[4750]: I0214 14:49:37.556041 4750 generic.go:334] "Generic (PLEG): container finished" podID="0569aaab-089b-43ac-9543-6cf0c064f752" containerID="efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff" exitCode=0 Feb 14 14:49:37 crc kubenswrapper[4750]: I0214 14:49:37.556356 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" 
event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerDied","Data":"efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff"} Feb 14 14:49:38 crc kubenswrapper[4750]: I0214 14:49:38.573912 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerStarted","Data":"f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43"} Feb 14 14:49:38 crc kubenswrapper[4750]: I0214 14:49:38.598569 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5mpcb" podStartSLOduration=3.19690392 podStartE2EDuration="5.596768611s" podCreationTimestamp="2026-02-14 14:49:33 +0000 UTC" firstStartedPulling="2026-02-14 14:49:35.53330416 +0000 UTC m=+3447.559293681" lastFinishedPulling="2026-02-14 14:49:37.933168891 +0000 UTC m=+3449.959158372" observedRunningTime="2026-02-14 14:49:38.592124148 +0000 UTC m=+3450.618113629" watchObservedRunningTime="2026-02-14 14:49:38.596768611 +0000 UTC m=+3450.622758102" Feb 14 14:49:43 crc kubenswrapper[4750]: I0214 14:49:43.813299 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:43 crc kubenswrapper[4750]: I0214 14:49:43.813932 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:43 crc kubenswrapper[4750]: I0214 14:49:43.881538 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:44 crc kubenswrapper[4750]: I0214 14:49:44.693027 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:44 crc kubenswrapper[4750]: I0214 14:49:44.756623 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-5mpcb"] Feb 14 14:49:46 crc kubenswrapper[4750]: I0214 14:49:46.672040 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5mpcb" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="registry-server" containerID="cri-o://f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43" gracePeriod=2 Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.217210 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.336291 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hks7b\" (UniqueName: \"kubernetes.io/projected/0569aaab-089b-43ac-9543-6cf0c064f752-kube-api-access-hks7b\") pod \"0569aaab-089b-43ac-9543-6cf0c064f752\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.336471 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-utilities\") pod \"0569aaab-089b-43ac-9543-6cf0c064f752\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.336529 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-catalog-content\") pod \"0569aaab-089b-43ac-9543-6cf0c064f752\" (UID: \"0569aaab-089b-43ac-9543-6cf0c064f752\") " Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.337510 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-utilities" (OuterVolumeSpecName: "utilities") pod "0569aaab-089b-43ac-9543-6cf0c064f752" (UID: 
"0569aaab-089b-43ac-9543-6cf0c064f752"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.342377 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0569aaab-089b-43ac-9543-6cf0c064f752-kube-api-access-hks7b" (OuterVolumeSpecName: "kube-api-access-hks7b") pod "0569aaab-089b-43ac-9543-6cf0c064f752" (UID: "0569aaab-089b-43ac-9543-6cf0c064f752"). InnerVolumeSpecName "kube-api-access-hks7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.387841 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0569aaab-089b-43ac-9543-6cf0c064f752" (UID: "0569aaab-089b-43ac-9543-6cf0c064f752"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.439471 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.439712 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0569aaab-089b-43ac-9543-6cf0c064f752-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.439724 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hks7b\" (UniqueName: \"kubernetes.io/projected/0569aaab-089b-43ac-9543-6cf0c064f752-kube-api-access-hks7b\") on node \"crc\" DevicePath \"\"" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.685364 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="0569aaab-089b-43ac-9543-6cf0c064f752" containerID="f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43" exitCode=0 Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.685409 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerDied","Data":"f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43"} Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.685437 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mpcb" event={"ID":"0569aaab-089b-43ac-9543-6cf0c064f752","Type":"ContainerDied","Data":"1988a8ad4b736eca0ded76d6eceeeb67b81c9149048ec9329f77e67b4c631863"} Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.685453 4750 scope.go:117] "RemoveContainer" containerID="f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.685449 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mpcb" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.712205 4750 scope.go:117] "RemoveContainer" containerID="efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.740054 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mpcb"] Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.754645 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5mpcb"] Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.761231 4750 scope.go:117] "RemoveContainer" containerID="2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.806818 4750 scope.go:117] "RemoveContainer" containerID="f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43" Feb 14 14:49:47 crc kubenswrapper[4750]: E0214 14:49:47.807544 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43\": container with ID starting with f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43 not found: ID does not exist" containerID="f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.807598 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43"} err="failed to get container status \"f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43\": rpc error: code = NotFound desc = could not find container \"f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43\": container with ID starting with f7b11fee40de4bad91e360f1abd03bc3778a183136db30dfb97e2e503de19f43 not 
found: ID does not exist" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.807630 4750 scope.go:117] "RemoveContainer" containerID="efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff" Feb 14 14:49:47 crc kubenswrapper[4750]: E0214 14:49:47.807951 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff\": container with ID starting with efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff not found: ID does not exist" containerID="efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.807993 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff"} err="failed to get container status \"efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff\": rpc error: code = NotFound desc = could not find container \"efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff\": container with ID starting with efff786fb9c1bdaf5a0f0e1a34bd85a4f54528b02ae3eb36a96f00fca0fcbfff not found: ID does not exist" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.808020 4750 scope.go:117] "RemoveContainer" containerID="2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd" Feb 14 14:49:47 crc kubenswrapper[4750]: E0214 14:49:47.808339 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd\": container with ID starting with 2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd not found: ID does not exist" containerID="2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd" Feb 14 14:49:47 crc kubenswrapper[4750]: I0214 14:49:47.808376 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd"} err="failed to get container status \"2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd\": rpc error: code = NotFound desc = could not find container \"2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd\": container with ID starting with 2531da1696f671fac7e520e54683bda1997b25d64ff237e6f09890f1e48b25fd not found: ID does not exist" Feb 14 14:49:48 crc kubenswrapper[4750]: I0214 14:49:48.760487 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" path="/var/lib/kubelet/pods/0569aaab-089b-43ac-9543-6cf0c064f752/volumes" Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.129678 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.130398 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.130450 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.131515 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ee8d47b0e27f157af8af026eecf8cf6589b4a5bce94cc41f3035e83577116662"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.131605 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://ee8d47b0e27f157af8af026eecf8cf6589b4a5bce94cc41f3035e83577116662" gracePeriod=600 Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.867704 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="ee8d47b0e27f157af8af026eecf8cf6589b4a5bce94cc41f3035e83577116662" exitCode=0 Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.867792 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"ee8d47b0e27f157af8af026eecf8cf6589b4a5bce94cc41f3035e83577116662"} Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.868611 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b"} Feb 14 14:50:00 crc kubenswrapper[4750]: I0214 14:50:00.868660 4750 scope.go:117] "RemoveContainer" containerID="a352d30e3da4ebf4c02b3951ce68111aedc33551e829fd747a359665eefeae3e" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.068027 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5t8k"] Feb 14 14:50:50 crc kubenswrapper[4750]: E0214 14:50:50.069227 
4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="registry-server" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.069248 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="registry-server" Feb 14 14:50:50 crc kubenswrapper[4750]: E0214 14:50:50.069289 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="extract-content" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.069299 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="extract-content" Feb 14 14:50:50 crc kubenswrapper[4750]: E0214 14:50:50.069324 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="extract-utilities" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.069332 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="extract-utilities" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.069603 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="0569aaab-089b-43ac-9543-6cf0c064f752" containerName="registry-server" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.071711 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.088387 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5t8k"] Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.109224 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-utilities\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.109554 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxtq\" (UniqueName: \"kubernetes.io/projected/a8b4a111-cd4b-44cc-a617-40c357aa6536-kube-api-access-gjxtq\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.109574 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-catalog-content\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.212282 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-utilities\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.212454 4750 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gjxtq\" (UniqueName: \"kubernetes.io/projected/a8b4a111-cd4b-44cc-a617-40c357aa6536-kube-api-access-gjxtq\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.212482 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-catalog-content\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.212816 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-utilities\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.212990 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-catalog-content\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.241152 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxtq\" (UniqueName: \"kubernetes.io/projected/a8b4a111-cd4b-44cc-a617-40c357aa6536-kube-api-access-gjxtq\") pod \"redhat-marketplace-k5t8k\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.407419 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:50:50 crc kubenswrapper[4750]: I0214 14:50:50.889586 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5t8k"] Feb 14 14:50:50 crc kubenswrapper[4750]: W0214 14:50:50.893290 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b4a111_cd4b_44cc_a617_40c357aa6536.slice/crio-708c57d9331642083012ca8860a226fe12cba092a9e5a07bc9236bceb2aa954c WatchSource:0}: Error finding container 708c57d9331642083012ca8860a226fe12cba092a9e5a07bc9236bceb2aa954c: Status 404 returned error can't find the container with id 708c57d9331642083012ca8860a226fe12cba092a9e5a07bc9236bceb2aa954c Feb 14 14:50:51 crc kubenswrapper[4750]: I0214 14:50:51.481595 4750 generic.go:334] "Generic (PLEG): container finished" podID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerID="9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888" exitCode=0 Feb 14 14:50:51 crc kubenswrapper[4750]: I0214 14:50:51.481692 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerDied","Data":"9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888"} Feb 14 14:50:51 crc kubenswrapper[4750]: I0214 14:50:51.482816 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerStarted","Data":"708c57d9331642083012ca8860a226fe12cba092a9e5a07bc9236bceb2aa954c"} Feb 14 14:50:51 crc kubenswrapper[4750]: I0214 14:50:51.485939 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:50:52 crc kubenswrapper[4750]: I0214 14:50:52.493853 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerStarted","Data":"4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775"} Feb 14 14:50:53 crc kubenswrapper[4750]: I0214 14:50:53.510201 4750 generic.go:334] "Generic (PLEG): container finished" podID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerID="4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775" exitCode=0 Feb 14 14:50:53 crc kubenswrapper[4750]: I0214 14:50:53.510294 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerDied","Data":"4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775"} Feb 14 14:50:54 crc kubenswrapper[4750]: I0214 14:50:54.527900 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerStarted","Data":"56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b"} Feb 14 14:50:54 crc kubenswrapper[4750]: I0214 14:50:54.552417 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5t8k" podStartSLOduration=2.049042262 podStartE2EDuration="4.552394725s" podCreationTimestamp="2026-02-14 14:50:50 +0000 UTC" firstStartedPulling="2026-02-14 14:50:51.485665431 +0000 UTC m=+3523.511654912" lastFinishedPulling="2026-02-14 14:50:53.989017894 +0000 UTC m=+3526.015007375" observedRunningTime="2026-02-14 14:50:54.546697113 +0000 UTC m=+3526.572686614" watchObservedRunningTime="2026-02-14 14:50:54.552394725 +0000 UTC m=+3526.578384226" Feb 14 14:50:54 crc kubenswrapper[4750]: E0214 14:50:54.773383 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:51372->38.102.83.36:35453: write tcp 38.102.83.36:51372->38.102.83.36:35453: write: broken pipe Feb 14 
14:51:00 crc kubenswrapper[4750]: I0214 14:51:00.408474 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:51:00 crc kubenswrapper[4750]: I0214 14:51:00.409327 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:51:00 crc kubenswrapper[4750]: I0214 14:51:00.484766 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:51:00 crc kubenswrapper[4750]: I0214 14:51:00.636670 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:51:00 crc kubenswrapper[4750]: I0214 14:51:00.724953 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5t8k"] Feb 14 14:51:02 crc kubenswrapper[4750]: I0214 14:51:02.609433 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k5t8k" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="registry-server" containerID="cri-o://56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b" gracePeriod=2 Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.180199 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.261093 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-catalog-content\") pod \"a8b4a111-cd4b-44cc-a617-40c357aa6536\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.261731 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-utilities\") pod \"a8b4a111-cd4b-44cc-a617-40c357aa6536\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.261916 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxtq\" (UniqueName: \"kubernetes.io/projected/a8b4a111-cd4b-44cc-a617-40c357aa6536-kube-api-access-gjxtq\") pod \"a8b4a111-cd4b-44cc-a617-40c357aa6536\" (UID: \"a8b4a111-cd4b-44cc-a617-40c357aa6536\") " Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.262606 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-utilities" (OuterVolumeSpecName: "utilities") pod "a8b4a111-cd4b-44cc-a617-40c357aa6536" (UID: "a8b4a111-cd4b-44cc-a617-40c357aa6536"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.280278 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b4a111-cd4b-44cc-a617-40c357aa6536-kube-api-access-gjxtq" (OuterVolumeSpecName: "kube-api-access-gjxtq") pod "a8b4a111-cd4b-44cc-a617-40c357aa6536" (UID: "a8b4a111-cd4b-44cc-a617-40c357aa6536"). InnerVolumeSpecName "kube-api-access-gjxtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.296553 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8b4a111-cd4b-44cc-a617-40c357aa6536" (UID: "a8b4a111-cd4b-44cc-a617-40c357aa6536"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.364045 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.364076 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b4a111-cd4b-44cc-a617-40c357aa6536-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.364086 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxtq\" (UniqueName: \"kubernetes.io/projected/a8b4a111-cd4b-44cc-a617-40c357aa6536-kube-api-access-gjxtq\") on node \"crc\" DevicePath \"\"" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.637803 4750 generic.go:334] "Generic (PLEG): container finished" podID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerID="56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b" exitCode=0 Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.637846 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerDied","Data":"56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b"} Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.637870 4750 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-k5t8k" event={"ID":"a8b4a111-cd4b-44cc-a617-40c357aa6536","Type":"ContainerDied","Data":"708c57d9331642083012ca8860a226fe12cba092a9e5a07bc9236bceb2aa954c"} Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.637887 4750 scope.go:117] "RemoveContainer" containerID="56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.638012 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5t8k" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.659295 4750 scope.go:117] "RemoveContainer" containerID="4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.678560 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5t8k"] Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.694206 4750 scope.go:117] "RemoveContainer" containerID="9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.701960 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5t8k"] Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.744582 4750 scope.go:117] "RemoveContainer" containerID="56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b" Feb 14 14:51:03 crc kubenswrapper[4750]: E0214 14:51:03.745083 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b\": container with ID starting with 56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b not found: ID does not exist" containerID="56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.745141 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b"} err="failed to get container status \"56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b\": rpc error: code = NotFound desc = could not find container \"56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b\": container with ID starting with 56506b468f1c102520a5dd65055701e7c31fa7b528592f359d1143b262c6ea4b not found: ID does not exist" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.745169 4750 scope.go:117] "RemoveContainer" containerID="4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775" Feb 14 14:51:03 crc kubenswrapper[4750]: E0214 14:51:03.746161 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775\": container with ID starting with 4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775 not found: ID does not exist" containerID="4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.746208 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775"} err="failed to get container status \"4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775\": rpc error: code = NotFound desc = could not find container \"4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775\": container with ID starting with 4c36dc104241a1466d3eab4b046be0922120d7674e2db2797bc07ab86de7a775 not found: ID does not exist" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.746237 4750 scope.go:117] "RemoveContainer" containerID="9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888" Feb 14 14:51:03 crc kubenswrapper[4750]: E0214 
14:51:03.746566 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888\": container with ID starting with 9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888 not found: ID does not exist" containerID="9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888" Feb 14 14:51:03 crc kubenswrapper[4750]: I0214 14:51:03.746601 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888"} err="failed to get container status \"9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888\": rpc error: code = NotFound desc = could not find container \"9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888\": container with ID starting with 9ec15c7cf6b46201be3f5a037d20c2c7c840c47c320845106db4be26d18b8888 not found: ID does not exist" Feb 14 14:51:04 crc kubenswrapper[4750]: I0214 14:51:04.762781 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" path="/var/lib/kubelet/pods/a8b4a111-cd4b-44cc-a617-40c357aa6536/volumes" Feb 14 14:52:00 crc kubenswrapper[4750]: I0214 14:52:00.129375 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:52:00 crc kubenswrapper[4750]: I0214 14:52:00.129925 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 14 14:52:30 crc kubenswrapper[4750]: I0214 14:52:30.128748 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:52:30 crc kubenswrapper[4750]: I0214 14:52:30.129453 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:53:00 crc kubenswrapper[4750]: I0214 14:53:00.128953 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 14:53:00 crc kubenswrapper[4750]: I0214 14:53:00.129877 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 14:53:00 crc kubenswrapper[4750]: I0214 14:53:00.129966 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 14:53:00 crc kubenswrapper[4750]: I0214 14:53:00.131648 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b"} 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 14:53:00 crc kubenswrapper[4750]: I0214 14:53:00.131780 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" gracePeriod=600 Feb 14 14:53:00 crc kubenswrapper[4750]: E0214 14:53:00.252718 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:53:01 crc kubenswrapper[4750]: I0214 14:53:01.069697 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" exitCode=0 Feb 14 14:53:01 crc kubenswrapper[4750]: I0214 14:53:01.069762 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b"} Feb 14 14:53:01 crc kubenswrapper[4750]: I0214 14:53:01.069911 4750 scope.go:117] "RemoveContainer" containerID="ee8d47b0e27f157af8af026eecf8cf6589b4a5bce94cc41f3035e83577116662" Feb 14 14:53:01 crc kubenswrapper[4750]: I0214 14:53:01.071478 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 
14 14:53:01 crc kubenswrapper[4750]: E0214 14:53:01.072423 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:53:14 crc kubenswrapper[4750]: I0214 14:53:14.742788 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:53:14 crc kubenswrapper[4750]: E0214 14:53:14.744006 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:53:28 crc kubenswrapper[4750]: I0214 14:53:28.752571 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:53:28 crc kubenswrapper[4750]: E0214 14:53:28.754483 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:53:41 crc kubenswrapper[4750]: I0214 14:53:41.743525 4750 scope.go:117] "RemoveContainer" 
containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:53:41 crc kubenswrapper[4750]: E0214 14:53:41.744476 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:53:53 crc kubenswrapper[4750]: I0214 14:53:53.743353 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:53:53 crc kubenswrapper[4750]: E0214 14:53:53.745276 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:54:05 crc kubenswrapper[4750]: I0214 14:54:05.741629 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:54:05 crc kubenswrapper[4750]: E0214 14:54:05.742400 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:54:17 crc kubenswrapper[4750]: I0214 14:54:17.741898 4750 scope.go:117] 
"RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:54:17 crc kubenswrapper[4750]: E0214 14:54:17.742651 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:54:32 crc kubenswrapper[4750]: I0214 14:54:32.741603 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:54:32 crc kubenswrapper[4750]: E0214 14:54:32.742600 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:54:44 crc kubenswrapper[4750]: I0214 14:54:44.742221 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:54:44 crc kubenswrapper[4750]: E0214 14:54:44.743215 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:54:49 crc kubenswrapper[4750]: E0214 14:54:49.248318 
4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:40676->38.102.83.36:35453: write tcp 38.102.83.36:40676->38.102.83.36:35453: write: broken pipe Feb 14 14:54:57 crc kubenswrapper[4750]: I0214 14:54:57.742517 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:54:57 crc kubenswrapper[4750]: E0214 14:54:57.743724 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:55:08 crc kubenswrapper[4750]: I0214 14:55:08.758980 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:55:08 crc kubenswrapper[4750]: E0214 14:55:08.762805 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:55:18 crc kubenswrapper[4750]: E0214 14:55:18.383688 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:38148->38.102.83.36:35453: write tcp 38.102.83.36:38148->38.102.83.36:35453: write: connection reset by peer Feb 14 14:55:24 crc kubenswrapper[4750]: I0214 14:55:24.030013 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:55:24 
crc kubenswrapper[4750]: E0214 14:55:24.032588 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:55:35 crc kubenswrapper[4750]: I0214 14:55:35.743004 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:55:35 crc kubenswrapper[4750]: E0214 14:55:35.744076 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.381647 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lqgr7"] Feb 14 14:55:46 crc kubenswrapper[4750]: E0214 14:55:46.383678 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="extract-content" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.383795 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="extract-content" Feb 14 14:55:46 crc kubenswrapper[4750]: E0214 14:55:46.383904 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="registry-server" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.383989 4750 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="registry-server" Feb 14 14:55:46 crc kubenswrapper[4750]: E0214 14:55:46.384085 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="extract-utilities" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.384189 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="extract-utilities" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.384579 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b4a111-cd4b-44cc-a617-40c357aa6536" containerName="registry-server" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.386797 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.395061 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqgr7"] Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.518668 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-utilities\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.518748 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6m6\" (UniqueName: \"kubernetes.io/projected/98370313-aea8-4d01-9575-2857ab32765b-kube-api-access-tr6m6\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.520093 4750 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-catalog-content\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.622546 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-utilities\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.622684 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6m6\" (UniqueName: \"kubernetes.io/projected/98370313-aea8-4d01-9575-2857ab32765b-kube-api-access-tr6m6\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.622743 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-catalog-content\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.623132 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-catalog-content\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.623132 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-utilities\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.657679 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6m6\" (UniqueName: \"kubernetes.io/projected/98370313-aea8-4d01-9575-2857ab32765b-kube-api-access-tr6m6\") pod \"certified-operators-lqgr7\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.720886 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:46 crc kubenswrapper[4750]: I0214 14:55:46.742051 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:55:46 crc kubenswrapper[4750]: E0214 14:55:46.742547 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:55:47 crc kubenswrapper[4750]: I0214 14:55:47.311765 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqgr7"] Feb 14 14:55:47 crc kubenswrapper[4750]: I0214 14:55:47.351146 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" 
event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerStarted","Data":"8ec8a67dc8a309b27beb1db98fa306f37d28c1eef98b0dafb5bc00e097c0f0a3"} Feb 14 14:55:48 crc kubenswrapper[4750]: I0214 14:55:48.365074 4750 generic.go:334] "Generic (PLEG): container finished" podID="98370313-aea8-4d01-9575-2857ab32765b" containerID="72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d" exitCode=0 Feb 14 14:55:48 crc kubenswrapper[4750]: I0214 14:55:48.365161 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerDied","Data":"72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d"} Feb 14 14:55:49 crc kubenswrapper[4750]: I0214 14:55:49.379853 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerStarted","Data":"12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f"} Feb 14 14:55:51 crc kubenswrapper[4750]: I0214 14:55:51.406758 4750 generic.go:334] "Generic (PLEG): container finished" podID="98370313-aea8-4d01-9575-2857ab32765b" containerID="12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f" exitCode=0 Feb 14 14:55:51 crc kubenswrapper[4750]: I0214 14:55:51.406862 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerDied","Data":"12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f"} Feb 14 14:55:52 crc kubenswrapper[4750]: I0214 14:55:52.420101 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerStarted","Data":"c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360"} Feb 14 14:55:52 crc kubenswrapper[4750]: 
I0214 14:55:52.439485 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lqgr7" podStartSLOduration=2.754680903 podStartE2EDuration="6.439464778s" podCreationTimestamp="2026-02-14 14:55:46 +0000 UTC" firstStartedPulling="2026-02-14 14:55:48.367218924 +0000 UTC m=+3820.393208405" lastFinishedPulling="2026-02-14 14:55:52.052002789 +0000 UTC m=+3824.077992280" observedRunningTime="2026-02-14 14:55:52.438881901 +0000 UTC m=+3824.464871392" watchObservedRunningTime="2026-02-14 14:55:52.439464778 +0000 UTC m=+3824.465454279" Feb 14 14:55:56 crc kubenswrapper[4750]: I0214 14:55:56.721825 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:56 crc kubenswrapper[4750]: I0214 14:55:56.722094 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:56 crc kubenswrapper[4750]: I0214 14:55:56.815155 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:57 crc kubenswrapper[4750]: I0214 14:55:57.527249 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:55:57 crc kubenswrapper[4750]: I0214 14:55:57.589339 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqgr7"] Feb 14 14:55:57 crc kubenswrapper[4750]: I0214 14:55:57.742247 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:55:57 crc kubenswrapper[4750]: E0214 14:55:57.742611 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:55:59 crc kubenswrapper[4750]: I0214 14:55:59.504808 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lqgr7" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="registry-server" containerID="cri-o://c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360" gracePeriod=2 Feb 14 14:55:59 crc kubenswrapper[4750]: I0214 14:55:59.980953 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.079982 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-utilities\") pod \"98370313-aea8-4d01-9575-2857ab32765b\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.080052 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-catalog-content\") pod \"98370313-aea8-4d01-9575-2857ab32765b\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.080091 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr6m6\" (UniqueName: \"kubernetes.io/projected/98370313-aea8-4d01-9575-2857ab32765b-kube-api-access-tr6m6\") pod \"98370313-aea8-4d01-9575-2857ab32765b\" (UID: \"98370313-aea8-4d01-9575-2857ab32765b\") " Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.080841 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-utilities" (OuterVolumeSpecName: "utilities") pod "98370313-aea8-4d01-9575-2857ab32765b" (UID: "98370313-aea8-4d01-9575-2857ab32765b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.087971 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98370313-aea8-4d01-9575-2857ab32765b-kube-api-access-tr6m6" (OuterVolumeSpecName: "kube-api-access-tr6m6") pod "98370313-aea8-4d01-9575-2857ab32765b" (UID: "98370313-aea8-4d01-9575-2857ab32765b"). InnerVolumeSpecName "kube-api-access-tr6m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.131844 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98370313-aea8-4d01-9575-2857ab32765b" (UID: "98370313-aea8-4d01-9575-2857ab32765b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.182287 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.182570 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98370313-aea8-4d01-9575-2857ab32765b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.182584 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr6m6\" (UniqueName: \"kubernetes.io/projected/98370313-aea8-4d01-9575-2857ab32765b-kube-api-access-tr6m6\") on node \"crc\" DevicePath \"\"" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.524307 4750 generic.go:334] "Generic (PLEG): container finished" podID="98370313-aea8-4d01-9575-2857ab32765b" containerID="c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360" exitCode=0 Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.524361 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerDied","Data":"c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360"} Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.524392 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqgr7" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.524414 4750 scope.go:117] "RemoveContainer" containerID="c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.524399 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqgr7" event={"ID":"98370313-aea8-4d01-9575-2857ab32765b","Type":"ContainerDied","Data":"8ec8a67dc8a309b27beb1db98fa306f37d28c1eef98b0dafb5bc00e097c0f0a3"} Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.580183 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqgr7"] Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.593832 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lqgr7"] Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.659784 4750 scope.go:117] "RemoveContainer" containerID="12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.721880 4750 scope.go:117] "RemoveContainer" containerID="72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.752780 4750 scope.go:117] "RemoveContainer" containerID="c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360" Feb 14 14:56:00 crc kubenswrapper[4750]: E0214 14:56:00.753341 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360\": container with ID starting with c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360 not found: ID does not exist" containerID="c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.753374 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360"} err="failed to get container status \"c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360\": rpc error: code = NotFound desc = could not find container \"c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360\": container with ID starting with c9bbd63e2fa996c320c88acc64904c9233137ab53888fc89f2101574082e2360 not found: ID does not exist" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.753392 4750 scope.go:117] "RemoveContainer" containerID="12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f" Feb 14 14:56:00 crc kubenswrapper[4750]: E0214 14:56:00.755256 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f\": container with ID starting with 12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f not found: ID does not exist" containerID="12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.755287 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f"} err="failed to get container status \"12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f\": rpc error: code = NotFound desc = could not find container \"12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f\": container with ID starting with 12b00649826181f558b7704ffb8cb91e44ef14d2cdf16f3554f955427afa835f not found: ID does not exist" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.755304 4750 scope.go:117] "RemoveContainer" containerID="72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d" Feb 14 14:56:00 crc kubenswrapper[4750]: E0214 
14:56:00.755913 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d\": container with ID starting with 72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d not found: ID does not exist" containerID="72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.755938 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d"} err="failed to get container status \"72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d\": rpc error: code = NotFound desc = could not find container \"72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d\": container with ID starting with 72b40a188104f79e960606918a0b5daeb52311bbdb0e910434a947b44b40e38d not found: ID does not exist" Feb 14 14:56:00 crc kubenswrapper[4750]: I0214 14:56:00.755947 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98370313-aea8-4d01-9575-2857ab32765b" path="/var/lib/kubelet/pods/98370313-aea8-4d01-9575-2857ab32765b/volumes" Feb 14 14:56:12 crc kubenswrapper[4750]: I0214 14:56:12.742889 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:56:12 crc kubenswrapper[4750]: E0214 14:56:12.743874 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:56:23 crc kubenswrapper[4750]: I0214 14:56:23.742089 
4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:56:23 crc kubenswrapper[4750]: E0214 14:56:23.743158 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:56:37 crc kubenswrapper[4750]: I0214 14:56:37.742631 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:56:37 crc kubenswrapper[4750]: E0214 14:56:37.743504 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:56:51 crc kubenswrapper[4750]: I0214 14:56:51.741725 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:56:51 crc kubenswrapper[4750]: E0214 14:56:51.742705 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:57:02 crc kubenswrapper[4750]: I0214 
14:57:02.743344 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:57:02 crc kubenswrapper[4750]: E0214 14:57:02.744506 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:57:14 crc kubenswrapper[4750]: I0214 14:57:14.743942 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:57:14 crc kubenswrapper[4750]: E0214 14:57:14.745109 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:57:25 crc kubenswrapper[4750]: I0214 14:57:25.741902 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:57:25 crc kubenswrapper[4750]: E0214 14:57:25.742888 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:57:36 crc 
kubenswrapper[4750]: I0214 14:57:36.742601 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:57:36 crc kubenswrapper[4750]: E0214 14:57:36.743343 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:57:51 crc kubenswrapper[4750]: I0214 14:57:51.742235 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:57:51 crc kubenswrapper[4750]: E0214 14:57:51.743308 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 14:58:04 crc kubenswrapper[4750]: I0214 14:58:04.742562 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 14:58:05 crc kubenswrapper[4750]: I0214 14:58:05.023067 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"06e2775236d82937944064efee4867514a0a7d5ecb5012b9dcc99bc6d8367104"} Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.332569 4750 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-lrkp9"] Feb 14 14:58:54 crc kubenswrapper[4750]: E0214 14:58:54.333695 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="registry-server" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.333712 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="registry-server" Feb 14 14:58:54 crc kubenswrapper[4750]: E0214 14:58:54.333732 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="extract-utilities" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.333741 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="extract-utilities" Feb 14 14:58:54 crc kubenswrapper[4750]: E0214 14:58:54.333782 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="extract-content" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.333792 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="extract-content" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.334057 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="98370313-aea8-4d01-9575-2857ab32765b" containerName="registry-server" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.336224 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.349984 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrkp9"] Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.519835 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjtbk\" (UniqueName: \"kubernetes.io/projected/c84ef440-f15d-4765-b360-fe0cfcc63769-kube-api-access-qjtbk\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.520358 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-utilities\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.520418 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-catalog-content\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.622313 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjtbk\" (UniqueName: \"kubernetes.io/projected/c84ef440-f15d-4765-b360-fe0cfcc63769-kube-api-access-qjtbk\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.622533 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-utilities\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.622583 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-catalog-content\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.623146 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-catalog-content\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.623692 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-utilities\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.658124 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjtbk\" (UniqueName: \"kubernetes.io/projected/c84ef440-f15d-4765-b360-fe0cfcc63769-kube-api-access-qjtbk\") pod \"redhat-operators-lrkp9\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:54 crc kubenswrapper[4750]: I0214 14:58:54.658875 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:58:55 crc kubenswrapper[4750]: I0214 14:58:55.963532 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrkp9"] Feb 14 14:58:56 crc kubenswrapper[4750]: I0214 14:58:56.632795 4750 generic.go:334] "Generic (PLEG): container finished" podID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerID="2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37" exitCode=0 Feb 14 14:58:56 crc kubenswrapper[4750]: I0214 14:58:56.632871 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerDied","Data":"2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37"} Feb 14 14:58:56 crc kubenswrapper[4750]: I0214 14:58:56.633321 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerStarted","Data":"9ebcb77e6523ea5fda282990998ba3a0aeddb6904ce0bf892014dddd4d69d661"} Feb 14 14:58:56 crc kubenswrapper[4750]: I0214 14:58:56.636567 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 14:58:58 crc kubenswrapper[4750]: I0214 14:58:58.657674 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerStarted","Data":"963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d"} Feb 14 14:59:02 crc kubenswrapper[4750]: I0214 14:59:02.772730 4750 generic.go:334] "Generic (PLEG): container finished" podID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerID="963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d" exitCode=0 Feb 14 14:59:02 crc kubenswrapper[4750]: I0214 14:59:02.774408 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerDied","Data":"963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d"} Feb 14 14:59:03 crc kubenswrapper[4750]: I0214 14:59:03.786763 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerStarted","Data":"ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7"} Feb 14 14:59:03 crc kubenswrapper[4750]: I0214 14:59:03.825430 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lrkp9" podStartSLOduration=3.23005671 podStartE2EDuration="9.825400654s" podCreationTimestamp="2026-02-14 14:58:54 +0000 UTC" firstStartedPulling="2026-02-14 14:58:56.636330781 +0000 UTC m=+4008.662320262" lastFinishedPulling="2026-02-14 14:59:03.231674685 +0000 UTC m=+4015.257664206" observedRunningTime="2026-02-14 14:59:03.812443346 +0000 UTC m=+4015.838432837" watchObservedRunningTime="2026-02-14 14:59:03.825400654 +0000 UTC m=+4015.851390175" Feb 14 14:59:04 crc kubenswrapper[4750]: I0214 14:59:04.659923 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:59:04 crc kubenswrapper[4750]: I0214 14:59:04.660419 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:59:05 crc kubenswrapper[4750]: I0214 14:59:05.710885 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrkp9" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" probeResult="failure" output=< Feb 14 14:59:05 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:59:05 crc kubenswrapper[4750]: > Feb 14 14:59:15 crc kubenswrapper[4750]: I0214 
14:59:15.709875 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrkp9" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" probeResult="failure" output=< Feb 14 14:59:15 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:59:15 crc kubenswrapper[4750]: > Feb 14 14:59:25 crc kubenswrapper[4750]: I0214 14:59:25.713306 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrkp9" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" probeResult="failure" output=< Feb 14 14:59:25 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 14:59:25 crc kubenswrapper[4750]: > Feb 14 14:59:34 crc kubenswrapper[4750]: I0214 14:59:34.714364 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:59:34 crc kubenswrapper[4750]: I0214 14:59:34.772206 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:59:34 crc kubenswrapper[4750]: I0214 14:59:34.956196 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrkp9"] Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.167075 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lrkp9" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" containerID="cri-o://ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7" gracePeriod=2 Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.755350 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.818100 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-catalog-content\") pod \"c84ef440-f15d-4765-b360-fe0cfcc63769\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.818184 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-utilities\") pod \"c84ef440-f15d-4765-b360-fe0cfcc63769\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.818381 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjtbk\" (UniqueName: \"kubernetes.io/projected/c84ef440-f15d-4765-b360-fe0cfcc63769-kube-api-access-qjtbk\") pod \"c84ef440-f15d-4765-b360-fe0cfcc63769\" (UID: \"c84ef440-f15d-4765-b360-fe0cfcc63769\") " Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.819265 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-utilities" (OuterVolumeSpecName: "utilities") pod "c84ef440-f15d-4765-b360-fe0cfcc63769" (UID: "c84ef440-f15d-4765-b360-fe0cfcc63769"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.820662 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.826751 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84ef440-f15d-4765-b360-fe0cfcc63769-kube-api-access-qjtbk" (OuterVolumeSpecName: "kube-api-access-qjtbk") pod "c84ef440-f15d-4765-b360-fe0cfcc63769" (UID: "c84ef440-f15d-4765-b360-fe0cfcc63769"). InnerVolumeSpecName "kube-api-access-qjtbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.921844 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjtbk\" (UniqueName: \"kubernetes.io/projected/c84ef440-f15d-4765-b360-fe0cfcc63769-kube-api-access-qjtbk\") on node \"crc\" DevicePath \"\"" Feb 14 14:59:36 crc kubenswrapper[4750]: I0214 14:59:36.934564 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c84ef440-f15d-4765-b360-fe0cfcc63769" (UID: "c84ef440-f15d-4765-b360-fe0cfcc63769"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.025718 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84ef440-f15d-4765-b360-fe0cfcc63769-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.181922 4750 generic.go:334] "Generic (PLEG): container finished" podID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerID="ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7" exitCode=0 Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.182003 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerDied","Data":"ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7"} Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.182057 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkp9" event={"ID":"c84ef440-f15d-4765-b360-fe0cfcc63769","Type":"ContainerDied","Data":"9ebcb77e6523ea5fda282990998ba3a0aeddb6904ce0bf892014dddd4d69d661"} Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.182094 4750 scope.go:117] "RemoveContainer" containerID="ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.182425 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkp9" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.229408 4750 scope.go:117] "RemoveContainer" containerID="963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.249912 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrkp9"] Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.260878 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lrkp9"] Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.270522 4750 scope.go:117] "RemoveContainer" containerID="2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.332334 4750 scope.go:117] "RemoveContainer" containerID="ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7" Feb 14 14:59:37 crc kubenswrapper[4750]: E0214 14:59:37.334022 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7\": container with ID starting with ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7 not found: ID does not exist" containerID="ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.334070 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7"} err="failed to get container status \"ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7\": rpc error: code = NotFound desc = could not find container \"ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7\": container with ID starting with ad62d25e0a39d43bfb997a88988e52fbb2aadfa6ac8b6015e6597eb2562fdbd7 not found: ID does 
not exist" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.334313 4750 scope.go:117] "RemoveContainer" containerID="963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d" Feb 14 14:59:37 crc kubenswrapper[4750]: E0214 14:59:37.335293 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d\": container with ID starting with 963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d not found: ID does not exist" containerID="963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.335317 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d"} err="failed to get container status \"963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d\": rpc error: code = NotFound desc = could not find container \"963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d\": container with ID starting with 963c6737ca59d57b168e87d9d6a4931fe197b4043fc413b8d4f47f996fe2297d not found: ID does not exist" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.335330 4750 scope.go:117] "RemoveContainer" containerID="2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37" Feb 14 14:59:37 crc kubenswrapper[4750]: E0214 14:59:37.335627 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37\": container with ID starting with 2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37 not found: ID does not exist" containerID="2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37" Feb 14 14:59:37 crc kubenswrapper[4750]: I0214 14:59:37.335669 4750 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37"} err="failed to get container status \"2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37\": rpc error: code = NotFound desc = could not find container \"2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37\": container with ID starting with 2c61ef538f5646099ebbd44dbfcfda90dd6684c3ebb6975d54e6a62325db1a37 not found: ID does not exist" Feb 14 14:59:38 crc kubenswrapper[4750]: I0214 14:59:38.758092 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" path="/var/lib/kubelet/pods/c84ef440-f15d-4765-b360-fe0cfcc63769/volumes" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.246715 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg"] Feb 14 15:00:00 crc kubenswrapper[4750]: E0214 15:00:00.247788 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.247809 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" Feb 14 15:00:00 crc kubenswrapper[4750]: E0214 15:00:00.247856 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="extract-content" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.247865 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="extract-content" Feb 14 15:00:00 crc kubenswrapper[4750]: E0214 15:00:00.247891 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="extract-utilities" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 
15:00:00.247903 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="extract-utilities" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.248167 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84ef440-f15d-4765-b360-fe0cfcc63769" containerName="registry-server" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.249041 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.251796 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.263347 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg"] Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.265787 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.326583 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f5ebaa-4091-449c-980c-e755e93f8094-secret-volume\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.326668 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdkjw\" (UniqueName: \"kubernetes.io/projected/18f5ebaa-4091-449c-980c-e755e93f8094-kube-api-access-mdkjw\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.326713 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f5ebaa-4091-449c-980c-e755e93f8094-config-volume\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.432045 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f5ebaa-4091-449c-980c-e755e93f8094-secret-volume\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.432211 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdkjw\" (UniqueName: \"kubernetes.io/projected/18f5ebaa-4091-449c-980c-e755e93f8094-kube-api-access-mdkjw\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.432272 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f5ebaa-4091-449c-980c-e755e93f8094-config-volume\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.449329 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/18f5ebaa-4091-449c-980c-e755e93f8094-config-volume\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.465533 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f5ebaa-4091-449c-980c-e755e93f8094-secret-volume\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.465677 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdkjw\" (UniqueName: \"kubernetes.io/projected/18f5ebaa-4091-449c-980c-e755e93f8094-kube-api-access-mdkjw\") pod \"collect-profiles-29518020-rl7jg\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:00 crc kubenswrapper[4750]: I0214 15:00:00.567749 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:01 crc kubenswrapper[4750]: I0214 15:00:01.067652 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg"] Feb 14 15:00:01 crc kubenswrapper[4750]: I0214 15:00:01.504800 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" event={"ID":"18f5ebaa-4091-449c-980c-e755e93f8094","Type":"ContainerStarted","Data":"b2858edb9cdbd83182ee17956e7acdc3cacf6a6b9346803ea9f95b30b080405c"} Feb 14 15:00:01 crc kubenswrapper[4750]: I0214 15:00:01.505152 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" event={"ID":"18f5ebaa-4091-449c-980c-e755e93f8094","Type":"ContainerStarted","Data":"14fde503008b61f822aa1f38a3d27af2712c0e8192e015dff426bf98554ad0f0"} Feb 14 15:00:01 crc kubenswrapper[4750]: I0214 15:00:01.519757 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" podStartSLOduration=1.519730217 podStartE2EDuration="1.519730217s" podCreationTimestamp="2026-02-14 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 15:00:01.516864036 +0000 UTC m=+4073.542853527" watchObservedRunningTime="2026-02-14 15:00:01.519730217 +0000 UTC m=+4073.545719698" Feb 14 15:00:02 crc kubenswrapper[4750]: I0214 15:00:02.517642 4750 generic.go:334] "Generic (PLEG): container finished" podID="18f5ebaa-4091-449c-980c-e755e93f8094" containerID="b2858edb9cdbd83182ee17956e7acdc3cacf6a6b9346803ea9f95b30b080405c" exitCode=0 Feb 14 15:00:02 crc kubenswrapper[4750]: I0214 15:00:02.518046 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" event={"ID":"18f5ebaa-4091-449c-980c-e755e93f8094","Type":"ContainerDied","Data":"b2858edb9cdbd83182ee17956e7acdc3cacf6a6b9346803ea9f95b30b080405c"} Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.014817 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.142973 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdkjw\" (UniqueName: \"kubernetes.io/projected/18f5ebaa-4091-449c-980c-e755e93f8094-kube-api-access-mdkjw\") pod \"18f5ebaa-4091-449c-980c-e755e93f8094\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.143261 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f5ebaa-4091-449c-980c-e755e93f8094-config-volume\") pod \"18f5ebaa-4091-449c-980c-e755e93f8094\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.143327 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f5ebaa-4091-449c-980c-e755e93f8094-secret-volume\") pod \"18f5ebaa-4091-449c-980c-e755e93f8094\" (UID: \"18f5ebaa-4091-449c-980c-e755e93f8094\") " Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.143748 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f5ebaa-4091-449c-980c-e755e93f8094-config-volume" (OuterVolumeSpecName: "config-volume") pod "18f5ebaa-4091-449c-980c-e755e93f8094" (UID: "18f5ebaa-4091-449c-980c-e755e93f8094"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.144456 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f5ebaa-4091-449c-980c-e755e93f8094-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.148790 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f5ebaa-4091-449c-980c-e755e93f8094-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18f5ebaa-4091-449c-980c-e755e93f8094" (UID: "18f5ebaa-4091-449c-980c-e755e93f8094"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.148908 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f5ebaa-4091-449c-980c-e755e93f8094-kube-api-access-mdkjw" (OuterVolumeSpecName: "kube-api-access-mdkjw") pod "18f5ebaa-4091-449c-980c-e755e93f8094" (UID: "18f5ebaa-4091-449c-980c-e755e93f8094"). InnerVolumeSpecName "kube-api-access-mdkjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.247004 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdkjw\" (UniqueName: \"kubernetes.io/projected/18f5ebaa-4091-449c-980c-e755e93f8094-kube-api-access-mdkjw\") on node \"crc\" DevicePath \"\"" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.247381 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f5ebaa-4091-449c-980c-e755e93f8094-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.543687 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" event={"ID":"18f5ebaa-4091-449c-980c-e755e93f8094","Type":"ContainerDied","Data":"14fde503008b61f822aa1f38a3d27af2712c0e8192e015dff426bf98554ad0f0"} Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.543733 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14fde503008b61f822aa1f38a3d27af2712c0e8192e015dff426bf98554ad0f0" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.544081 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg" Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.602826 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks"] Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.618402 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517975-8rdks"] Feb 14 15:00:04 crc kubenswrapper[4750]: I0214 15:00:04.757290 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a812c0-eacc-43bd-822e-95d43a8882e3" path="/var/lib/kubelet/pods/46a812c0-eacc-43bd-822e-95d43a8882e3/volumes" Feb 14 15:00:27 crc kubenswrapper[4750]: I0214 15:00:27.905806 4750 scope.go:117] "RemoveContainer" containerID="0a7d7933a6ea3c604b242865bfbd5886145889b28c05ee044a6b885770a1ed1d" Feb 14 15:00:30 crc kubenswrapper[4750]: I0214 15:00:30.129255 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:00:30 crc kubenswrapper[4750]: I0214 15:00:30.130080 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.129567 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.130229 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.178788 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29518021-6v549"] Feb 14 15:01:00 crc kubenswrapper[4750]: E0214 15:01:00.179525 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f5ebaa-4091-449c-980c-e755e93f8094" containerName="collect-profiles" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.179550 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f5ebaa-4091-449c-980c-e755e93f8094" containerName="collect-profiles" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.179894 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f5ebaa-4091-449c-980c-e755e93f8094" containerName="collect-profiles" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.180885 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.197673 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29518021-6v549"] Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.345804 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72d7v\" (UniqueName: \"kubernetes.io/projected/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-kube-api-access-72d7v\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.345854 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-combined-ca-bundle\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.346009 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-fernet-keys\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.346550 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-config-data\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.448315 4750 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-combined-ca-bundle\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.448427 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-fernet-keys\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.448580 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-config-data\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.448650 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72d7v\" (UniqueName: \"kubernetes.io/projected/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-kube-api-access-72d7v\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.454850 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-config-data\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.455301 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-combined-ca-bundle\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.456235 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-fernet-keys\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.467903 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72d7v\" (UniqueName: \"kubernetes.io/projected/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-kube-api-access-72d7v\") pod \"keystone-cron-29518021-6v549\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:00 crc kubenswrapper[4750]: I0214 15:01:00.515776 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:01 crc kubenswrapper[4750]: I0214 15:01:01.017017 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29518021-6v549"] Feb 14 15:01:01 crc kubenswrapper[4750]: I0214 15:01:01.206882 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518021-6v549" event={"ID":"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a","Type":"ContainerStarted","Data":"9f10da0c4e46e0701f18f0aa8211c8f7b04f904c752dd389e62da3538865352d"} Feb 14 15:01:02 crc kubenswrapper[4750]: I0214 15:01:02.221828 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518021-6v549" event={"ID":"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a","Type":"ContainerStarted","Data":"dbcc33da9fcdeb1b5eaecca653588e10d15deb45799b1c75d6d52ba9a77d7885"} Feb 14 15:01:02 crc kubenswrapper[4750]: I0214 15:01:02.246605 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29518021-6v549" podStartSLOduration=2.246562892 podStartE2EDuration="2.246562892s" podCreationTimestamp="2026-02-14 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 15:01:02.239555333 +0000 UTC m=+4134.265544824" watchObservedRunningTime="2026-02-14 15:01:02.246562892 +0000 UTC m=+4134.272552463" Feb 14 15:01:05 crc kubenswrapper[4750]: I0214 15:01:05.254684 4750 generic.go:334] "Generic (PLEG): container finished" podID="4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" containerID="dbcc33da9fcdeb1b5eaecca653588e10d15deb45799b1c75d6d52ba9a77d7885" exitCode=0 Feb 14 15:01:05 crc kubenswrapper[4750]: I0214 15:01:05.254838 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518021-6v549" 
event={"ID":"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a","Type":"ContainerDied","Data":"dbcc33da9fcdeb1b5eaecca653588e10d15deb45799b1c75d6d52ba9a77d7885"} Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.704460 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.813439 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72d7v\" (UniqueName: \"kubernetes.io/projected/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-kube-api-access-72d7v\") pod \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.813583 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-config-data\") pod \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.813747 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-fernet-keys\") pod \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.814177 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-combined-ca-bundle\") pod \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\" (UID: \"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a\") " Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.820557 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-kube-api-access-72d7v" 
(OuterVolumeSpecName: "kube-api-access-72d7v") pod "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" (UID: "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a"). InnerVolumeSpecName "kube-api-access-72d7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.821269 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" (UID: "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.849386 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" (UID: "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.892412 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-config-data" (OuterVolumeSpecName: "config-data") pod "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" (UID: "4718c9f5-bcf2-48a7-bd19-6a97de6ed02a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.918153 4750 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.918182 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72d7v\" (UniqueName: \"kubernetes.io/projected/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-kube-api-access-72d7v\") on node \"crc\" DevicePath \"\"" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.918208 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 15:01:06 crc kubenswrapper[4750]: I0214 15:01:06.918217 4750 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4718c9f5-bcf2-48a7-bd19-6a97de6ed02a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 15:01:07 crc kubenswrapper[4750]: I0214 15:01:07.294522 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518021-6v549" event={"ID":"4718c9f5-bcf2-48a7-bd19-6a97de6ed02a","Type":"ContainerDied","Data":"9f10da0c4e46e0701f18f0aa8211c8f7b04f904c752dd389e62da3538865352d"} Feb 14 15:01:07 crc kubenswrapper[4750]: I0214 15:01:07.294814 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f10da0c4e46e0701f18f0aa8211c8f7b04f904c752dd389e62da3538865352d" Feb 14 15:01:07 crc kubenswrapper[4750]: I0214 15:01:07.294591 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29518021-6v549" Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.129663 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.130940 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.131041 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.132842 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06e2775236d82937944064efee4867514a0a7d5ecb5012b9dcc99bc6d8367104"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.132965 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://06e2775236d82937944064efee4867514a0a7d5ecb5012b9dcc99bc6d8367104" gracePeriod=600 Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.794086 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="06e2775236d82937944064efee4867514a0a7d5ecb5012b9dcc99bc6d8367104" exitCode=0 Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.794296 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"06e2775236d82937944064efee4867514a0a7d5ecb5012b9dcc99bc6d8367104"} Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.794428 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"} Feb 14 15:01:30 crc kubenswrapper[4750]: I0214 15:01:30.794453 4750 scope.go:117] "RemoveContainer" containerID="37a617d59356f40343391a2b3afe6cb1fee29bba81dee9374e618a8942dc9d5b" Feb 14 15:02:14 crc kubenswrapper[4750]: E0214 15:02:14.698634 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:46278->38.102.83.36:35453: write tcp 38.102.83.36:46278->38.102.83.36:35453: write: connection reset by peer Feb 14 15:02:17 crc kubenswrapper[4750]: E0214 15:02:17.770566 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:46364->38.102.83.36:35453: write tcp 38.102.83.36:46364->38.102.83.36:35453: write: broken pipe Feb 14 15:03:30 crc kubenswrapper[4750]: I0214 15:03:30.129140 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:03:30 crc kubenswrapper[4750]: I0214 15:03:30.129857 4750 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:03:51 crc kubenswrapper[4750]: E0214 15:03:51.275568 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:56318->38.102.83.36:35453: write tcp 38.102.83.36:56318->38.102.83.36:35453: write: connection reset by peer Feb 14 15:04:00 crc kubenswrapper[4750]: I0214 15:04:00.128978 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:04:00 crc kubenswrapper[4750]: I0214 15:04:00.129426 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:04:02 crc kubenswrapper[4750]: I0214 15:04:02.943104 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vtzr9"] Feb 14 15:04:02 crc kubenswrapper[4750]: E0214 15:04:02.945499 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" containerName="keystone-cron" Feb 14 15:04:02 crc kubenswrapper[4750]: I0214 15:04:02.945614 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" containerName="keystone-cron" Feb 14 15:04:02 crc kubenswrapper[4750]: I0214 15:04:02.946059 4750 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4718c9f5-bcf2-48a7-bd19-6a97de6ed02a" containerName="keystone-cron" Feb 14 15:04:02 crc kubenswrapper[4750]: I0214 15:04:02.948412 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:02 crc kubenswrapper[4750]: I0214 15:04:02.968690 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtzr9"] Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.083183 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-utilities\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.083563 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-catalog-content\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.083676 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5m4l\" (UniqueName: \"kubernetes.io/projected/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-kube-api-access-b5m4l\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.191070 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-catalog-content\") pod \"redhat-marketplace-vtzr9\" (UID: 
\"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.191245 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5m4l\" (UniqueName: \"kubernetes.io/projected/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-kube-api-access-b5m4l\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.191487 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-utilities\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.192533 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-utilities\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.192827 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-catalog-content\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.224846 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5m4l\" (UniqueName: \"kubernetes.io/projected/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-kube-api-access-b5m4l\") pod \"redhat-marketplace-vtzr9\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") " 
pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.284299 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtzr9" Feb 14 15:04:03 crc kubenswrapper[4750]: I0214 15:04:03.755264 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtzr9"] Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.347625 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2vn5"] Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.353494 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.374633 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2vn5"] Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.524198 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-utilities\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.524482 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jhm\" (UniqueName: \"kubernetes.io/projected/f765929e-ed49-47b3-9a5a-9ec46ca94f07-kube-api-access-68jhm\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.524618 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-catalog-content\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.626847 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-utilities\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.626952 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68jhm\" (UniqueName: \"kubernetes.io/projected/f765929e-ed49-47b3-9a5a-9ec46ca94f07-kube-api-access-68jhm\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.627029 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-catalog-content\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.627505 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-utilities\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5" Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.627564 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-catalog-content\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.645940 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68jhm\" (UniqueName: \"kubernetes.io/projected/f765929e-ed49-47b3-9a5a-9ec46ca94f07-kube-api-access-68jhm\") pod \"community-operators-l2vn5\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") " pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.730141 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.825393 4750 generic.go:334] "Generic (PLEG): container finished" podID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerID="17a3ffe6b25c51d75ae41057e049f4648c8b62c4ce7f694db146ed20d469ec6d" exitCode=0
Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.825701 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerDied","Data":"17a3ffe6b25c51d75ae41057e049f4648c8b62c4ce7f694db146ed20d469ec6d"}
Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.825735 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerStarted","Data":"eef81adae09a62341f1e390bc05ff1d9af940bc1cd283d4711cbf598eeafdd91"}
Feb 14 15:04:04 crc kubenswrapper[4750]: I0214 15:04:04.831226 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 15:04:05 crc kubenswrapper[4750]: I0214 15:04:05.404763 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2vn5"]
Feb 14 15:04:05 crc kubenswrapper[4750]: I0214 15:04:05.841253 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerStarted","Data":"c394dcf7a214b8376ba1f953e974d9e4e86a2453c0c912bfb705e55bfd3cc11f"}
Feb 14 15:04:05 crc kubenswrapper[4750]: I0214 15:04:05.843609 4750 generic.go:334] "Generic (PLEG): container finished" podID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerID="2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b" exitCode=0
Feb 14 15:04:05 crc kubenswrapper[4750]: I0214 15:04:05.843656 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerDied","Data":"2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b"}
Feb 14 15:04:05 crc kubenswrapper[4750]: I0214 15:04:05.843681 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerStarted","Data":"3ab4d1019b653d4ed26582f11846842b1999112dde4cc4c69db527680c5aaeb6"}
Feb 14 15:04:06 crc kubenswrapper[4750]: I0214 15:04:06.856668 4750 generic.go:334] "Generic (PLEG): container finished" podID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerID="c394dcf7a214b8376ba1f953e974d9e4e86a2453c0c912bfb705e55bfd3cc11f" exitCode=0
Feb 14 15:04:06 crc kubenswrapper[4750]: I0214 15:04:06.856762 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerDied","Data":"c394dcf7a214b8376ba1f953e974d9e4e86a2453c0c912bfb705e55bfd3cc11f"}
Feb 14 15:04:06 crc kubenswrapper[4750]: I0214 15:04:06.858710 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerStarted","Data":"586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa"}
Feb 14 15:04:07 crc kubenswrapper[4750]: I0214 15:04:07.871631 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerStarted","Data":"bbfd9752503aa5f0a4e1bf431432d28764fc9611bc6fa9104db7b41e4656c91f"}
Feb 14 15:04:07 crc kubenswrapper[4750]: I0214 15:04:07.907931 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vtzr9" podStartSLOduration=3.471468132 podStartE2EDuration="5.907910561s" podCreationTimestamp="2026-02-14 15:04:02 +0000 UTC" firstStartedPulling="2026-02-14 15:04:04.830996096 +0000 UTC m=+4316.856985577" lastFinishedPulling="2026-02-14 15:04:07.267438515 +0000 UTC m=+4319.293428006" observedRunningTime="2026-02-14 15:04:07.894708977 +0000 UTC m=+4319.920698498" watchObservedRunningTime="2026-02-14 15:04:07.907910561 +0000 UTC m=+4319.933900042"
Feb 14 15:04:08 crc kubenswrapper[4750]: I0214 15:04:08.884828 4750 generic.go:334] "Generic (PLEG): container finished" podID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerID="586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa" exitCode=0
Feb 14 15:04:08 crc kubenswrapper[4750]: I0214 15:04:08.885562 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerDied","Data":"586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa"}
Feb 14 15:04:09 crc kubenswrapper[4750]: I0214 15:04:09.897376 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerStarted","Data":"3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb"}
Feb 14 15:04:13 crc kubenswrapper[4750]: I0214 15:04:13.285212 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vtzr9"
Feb 14 15:04:13 crc kubenswrapper[4750]: I0214 15:04:13.285671 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vtzr9"
Feb 14 15:04:14 crc kubenswrapper[4750]: I0214 15:04:14.351629 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vtzr9" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="registry-server" probeResult="failure" output=<
Feb 14 15:04:14 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s
Feb 14 15:04:14 crc kubenswrapper[4750]: >
Feb 14 15:04:14 crc kubenswrapper[4750]: I0214 15:04:14.731838 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:14 crc kubenswrapper[4750]: I0214 15:04:14.732242 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:15 crc kubenswrapper[4750]: I0214 15:04:15.818041 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l2vn5" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="registry-server" probeResult="failure" output=<
Feb 14 15:04:15 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s
Feb 14 15:04:15 crc kubenswrapper[4750]: >
Feb 14 15:04:23 crc kubenswrapper[4750]: I0214 15:04:23.410943 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vtzr9"
Feb 14 15:04:23 crc kubenswrapper[4750]: I0214 15:04:23.451517 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2vn5" podStartSLOduration=16.008034365 podStartE2EDuration="19.45149815s" podCreationTimestamp="2026-02-14 15:04:04 +0000 UTC" firstStartedPulling="2026-02-14 15:04:05.845514126 +0000 UTC m=+4317.871503627" lastFinishedPulling="2026-02-14 15:04:09.288977931 +0000 UTC m=+4321.314967412" observedRunningTime="2026-02-14 15:04:09.916712166 +0000 UTC m=+4321.942701677" watchObservedRunningTime="2026-02-14 15:04:23.45149815 +0000 UTC m=+4335.477487631"
Feb 14 15:04:23 crc kubenswrapper[4750]: I0214 15:04:23.534559 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vtzr9"
Feb 14 15:04:23 crc kubenswrapper[4750]: I0214 15:04:23.655072 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtzr9"]
Feb 14 15:04:24 crc kubenswrapper[4750]: I0214 15:04:24.785434 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:24 crc kubenswrapper[4750]: I0214 15:04:24.845339 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:25 crc kubenswrapper[4750]: I0214 15:04:25.058601 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vtzr9" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="registry-server" containerID="cri-o://bbfd9752503aa5f0a4e1bf431432d28764fc9611bc6fa9104db7b41e4656c91f" gracePeriod=2
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.096381 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2vn5"]
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.104478 4750 generic.go:334] "Generic (PLEG): container finished" podID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerID="bbfd9752503aa5f0a4e1bf431432d28764fc9611bc6fa9104db7b41e4656c91f" exitCode=0
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.104617 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerDied","Data":"bbfd9752503aa5f0a4e1bf431432d28764fc9611bc6fa9104db7b41e4656c91f"}
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.105014 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l2vn5" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="registry-server" containerID="cri-o://3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb" gracePeriod=2
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.414286 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtzr9"
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.484889 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-catalog-content\") pod \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") "
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.484990 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5m4l\" (UniqueName: \"kubernetes.io/projected/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-kube-api-access-b5m4l\") pod \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") "
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.491942 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-kube-api-access-b5m4l" (OuterVolumeSpecName: "kube-api-access-b5m4l") pod "518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" (UID: "518a3b6d-d912-4e2f-8d9a-e9f40095dcd4"). InnerVolumeSpecName "kube-api-access-b5m4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.510229 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" (UID: "518a3b6d-d912-4e2f-8d9a-e9f40095dcd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.587686 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-utilities\") pod \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\" (UID: \"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4\") "
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.588225 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-utilities" (OuterVolumeSpecName: "utilities") pod "518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" (UID: "518a3b6d-d912-4e2f-8d9a-e9f40095dcd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.588689 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5m4l\" (UniqueName: \"kubernetes.io/projected/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-kube-api-access-b5m4l\") on node \"crc\" DevicePath \"\""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.588841 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.588948 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.634506 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.690168 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-catalog-content\") pod \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") "
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.690280 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-utilities\") pod \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") "
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.690508 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68jhm\" (UniqueName: \"kubernetes.io/projected/f765929e-ed49-47b3-9a5a-9ec46ca94f07-kube-api-access-68jhm\") pod \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\" (UID: \"f765929e-ed49-47b3-9a5a-9ec46ca94f07\") "
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.691827 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-utilities" (OuterVolumeSpecName: "utilities") pod "f765929e-ed49-47b3-9a5a-9ec46ca94f07" (UID: "f765929e-ed49-47b3-9a5a-9ec46ca94f07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.696019 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f765929e-ed49-47b3-9a5a-9ec46ca94f07-kube-api-access-68jhm" (OuterVolumeSpecName: "kube-api-access-68jhm") pod "f765929e-ed49-47b3-9a5a-9ec46ca94f07" (UID: "f765929e-ed49-47b3-9a5a-9ec46ca94f07"). InnerVolumeSpecName "kube-api-access-68jhm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.758320 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f765929e-ed49-47b3-9a5a-9ec46ca94f07" (UID: "f765929e-ed49-47b3-9a5a-9ec46ca94f07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.793599 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68jhm\" (UniqueName: \"kubernetes.io/projected/f765929e-ed49-47b3-9a5a-9ec46ca94f07-kube-api-access-68jhm\") on node \"crc\" DevicePath \"\""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.793639 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 15:04:26 crc kubenswrapper[4750]: I0214 15:04:26.793653 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765929e-ed49-47b3-9a5a-9ec46ca94f07-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.128262 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtzr9"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.128225 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtzr9" event={"ID":"518a3b6d-d912-4e2f-8d9a-e9f40095dcd4","Type":"ContainerDied","Data":"eef81adae09a62341f1e390bc05ff1d9af940bc1cd283d4711cbf598eeafdd91"}
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.129422 4750 scope.go:117] "RemoveContainer" containerID="bbfd9752503aa5f0a4e1bf431432d28764fc9611bc6fa9104db7b41e4656c91f"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.133307 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2vn5"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.133314 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerDied","Data":"3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb"}
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.133187 4750 generic.go:334] "Generic (PLEG): container finished" podID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerID="3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb" exitCode=0
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.134379 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2vn5" event={"ID":"f765929e-ed49-47b3-9a5a-9ec46ca94f07","Type":"ContainerDied","Data":"3ab4d1019b653d4ed26582f11846842b1999112dde4cc4c69db527680c5aaeb6"}
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.180071 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtzr9"]
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.180719 4750 scope.go:117] "RemoveContainer" containerID="c394dcf7a214b8376ba1f953e974d9e4e86a2453c0c912bfb705e55bfd3cc11f"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.190841 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtzr9"]
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.201044 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2vn5"]
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.213471 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2vn5"]
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.228556 4750 scope.go:117] "RemoveContainer" containerID="17a3ffe6b25c51d75ae41057e049f4648c8b62c4ce7f694db146ed20d469ec6d"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.308071 4750 scope.go:117] "RemoveContainer" containerID="3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.383100 4750 scope.go:117] "RemoveContainer" containerID="586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.405742 4750 scope.go:117] "RemoveContainer" containerID="2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.465200 4750 scope.go:117] "RemoveContainer" containerID="3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb"
Feb 14 15:04:27 crc kubenswrapper[4750]: E0214 15:04:27.465629 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb\": container with ID starting with 3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb not found: ID does not exist" containerID="3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.465688 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb"} err="failed to get container status \"3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb\": rpc error: code = NotFound desc = could not find container \"3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb\": container with ID starting with 3ff4bc99b758e98f8a6c163a2ab1c92c564d7ecc319252ceaddffc07ee2201eb not found: ID does not exist"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.465722 4750 scope.go:117] "RemoveContainer" containerID="586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa"
Feb 14 15:04:27 crc kubenswrapper[4750]: E0214 15:04:27.466024 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa\": container with ID starting with 586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa not found: ID does not exist" containerID="586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.466063 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa"} err="failed to get container status \"586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa\": rpc error: code = NotFound desc = could not find container \"586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa\": container with ID starting with 586a0f04ba3db9bb0801e7b5f27deb95bafdca4dbc5451af2f34e670a6ef3afa not found: ID does not exist"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.466090 4750 scope.go:117] "RemoveContainer" containerID="2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b"
Feb 14 15:04:27 crc kubenswrapper[4750]: E0214 15:04:27.466326 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b\": container with ID starting with 2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b not found: ID does not exist" containerID="2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b"
Feb 14 15:04:27 crc kubenswrapper[4750]: I0214 15:04:27.466359 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b"} err="failed to get container status \"2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b\": rpc error: code = NotFound desc = could not find container \"2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b\": container with ID starting with 2aea5aa8b5282fc080ba7491e25816afb69db93a0b5684e22e61474825c1727b not found: ID does not exist"
Feb 14 15:04:28 crc kubenswrapper[4750]: I0214 15:04:28.766542 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" path="/var/lib/kubelet/pods/518a3b6d-d912-4e2f-8d9a-e9f40095dcd4/volumes"
Feb 14 15:04:28 crc kubenswrapper[4750]: I0214 15:04:28.767631 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" path="/var/lib/kubelet/pods/f765929e-ed49-47b3-9a5a-9ec46ca94f07/volumes"
Feb 14 15:04:30 crc kubenswrapper[4750]: I0214 15:04:30.129273 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 15:04:30 crc kubenswrapper[4750]: I0214 15:04:30.130847 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 15:04:30 crc kubenswrapper[4750]: I0214 15:04:30.130932 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld"
Feb 14 15:04:30 crc kubenswrapper[4750]: I0214 15:04:30.132045 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 15:04:30 crc kubenswrapper[4750]: I0214 15:04:30.132173 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" gracePeriod=600
Feb 14 15:04:30 crc kubenswrapper[4750]: E0214 15:04:30.885602 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:04:31 crc kubenswrapper[4750]: I0214 15:04:31.194725 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" exitCode=0
Feb 14 15:04:31 crc kubenswrapper[4750]: I0214 15:04:31.194775 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"}
Feb 14 15:04:31 crc kubenswrapper[4750]: I0214 15:04:31.194819 4750 scope.go:117] "RemoveContainer" containerID="06e2775236d82937944064efee4867514a0a7d5ecb5012b9dcc99bc6d8367104"
Feb 14 15:04:31 crc kubenswrapper[4750]: I0214 15:04:31.195607 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:04:31 crc kubenswrapper[4750]: E0214 15:04:31.195966 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:04:42 crc kubenswrapper[4750]: I0214 15:04:42.743227 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:04:42 crc kubenswrapper[4750]: E0214 15:04:42.744305 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:04:56 crc kubenswrapper[4750]: I0214 15:04:56.742823 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:04:56 crc kubenswrapper[4750]: E0214 15:04:56.744243 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:05:09 crc kubenswrapper[4750]: I0214 15:05:09.742990 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:05:09 crc kubenswrapper[4750]: E0214 15:05:09.744479 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:05:23 crc kubenswrapper[4750]: I0214 15:05:23.741973 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:05:23 crc kubenswrapper[4750]: E0214 15:05:23.742981 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:05:38 crc kubenswrapper[4750]: I0214 15:05:38.750262 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:05:38 crc kubenswrapper[4750]: E0214 15:05:38.751148 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:05:52 crc kubenswrapper[4750]: I0214 15:05:52.742917 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:05:52 crc kubenswrapper[4750]: E0214 15:05:52.743943 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:06:05 crc kubenswrapper[4750]: I0214 15:06:05.743672 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:06:05 crc kubenswrapper[4750]: E0214 15:06:05.746476 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:06:20 crc kubenswrapper[4750]: I0214 15:06:20.743685 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:06:20 crc kubenswrapper[4750]: E0214 15:06:20.744975 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:06:33 crc kubenswrapper[4750]: I0214 15:06:33.742317 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af"
Feb 14 15:06:33 crc kubenswrapper[4750]: E0214 15:06:33.743397 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.369318 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 14 15:06:35 crc kubenswrapper[4750]: E0214 15:06:35.370406 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="registry-server"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370422 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="registry-server"
Feb 14 15:06:35 crc kubenswrapper[4750]: E0214 15:06:35.370431 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="extract-content"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370437 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="extract-content"
Feb 14 15:06:35 crc kubenswrapper[4750]: E0214 15:06:35.370460 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="extract-content"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370466 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="extract-content"
Feb 14 15:06:35 crc kubenswrapper[4750]: E0214 15:06:35.370476 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="extract-utilities"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370482 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="extract-utilities"
Feb 14 15:06:35 crc kubenswrapper[4750]: E0214 15:06:35.370501 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="extract-utilities"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370506 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="extract-utilities"
Feb 14 15:06:35 crc kubenswrapper[4750]: E0214 15:06:35.370534 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="registry-server"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370540 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="registry-server"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370767 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="518a3b6d-d912-4e2f-8d9a-e9f40095dcd4" containerName="registry-server"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.370798 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f765929e-ed49-47b3-9a5a-9ec46ca94f07" containerName="registry-server"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.371598 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.380196 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j6dvp"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.380428 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.380577 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.380741 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.390784 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.477871 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.477929 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478004 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478055 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-config-data\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478149 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478174 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478198 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest"
Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478239 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plfb5\" (UniqueName:
\"kubernetes.io/projected/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-kube-api-access-plfb5\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.478544 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.580746 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.580800 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.580830 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.580880 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plfb5\" (UniqueName: \"kubernetes.io/projected/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-kube-api-access-plfb5\") pod 
\"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.580999 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.581080 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.581177 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.581257 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.581314 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.582043 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.584407 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.585582 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.586335 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-config-data\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.586431 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " 
pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.589643 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.591730 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.594710 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.601615 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plfb5\" (UniqueName: \"kubernetes.io/projected/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-kube-api-access-plfb5\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.624358 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " pod="openstack/tempest-tests-tempest" Feb 14 15:06:35 crc kubenswrapper[4750]: I0214 15:06:35.714457 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 15:06:36 crc kubenswrapper[4750]: I0214 15:06:36.208004 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 15:06:36 crc kubenswrapper[4750]: I0214 15:06:36.841393 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e4d753a9-5bca-4940-9aa9-72a57f4f32a2","Type":"ContainerStarted","Data":"32f71c78ce9db0876b18bce2074c9ecadd6dc999e6e62ab73a24bf11bc776d05"} Feb 14 15:06:46 crc kubenswrapper[4750]: I0214 15:06:46.743009 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:06:46 crc kubenswrapper[4750]: E0214 15:06:46.744518 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.685370 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4v8dn"] Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.690156 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.711344 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4v8dn"] Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.765587 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-utilities\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.765956 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-catalog-content\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.765986 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjb2c\" (UniqueName: \"kubernetes.io/projected/1ffb2112-a324-4487-840a-4036723f8042-kube-api-access-qjb2c\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.868637 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-catalog-content\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.868679 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qjb2c\" (UniqueName: \"kubernetes.io/projected/1ffb2112-a324-4487-840a-4036723f8042-kube-api-access-qjb2c\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.868734 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-utilities\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.869133 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-catalog-content\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.869202 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-utilities\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:53 crc kubenswrapper[4750]: I0214 15:06:53.903908 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjb2c\" (UniqueName: \"kubernetes.io/projected/1ffb2112-a324-4487-840a-4036723f8042-kube-api-access-qjb2c\") pod \"certified-operators-4v8dn\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:54 crc kubenswrapper[4750]: I0214 15:06:54.045711 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:06:58 crc kubenswrapper[4750]: I0214 15:06:58.752390 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:06:58 crc kubenswrapper[4750]: E0214 15:06:58.753404 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:07:06 crc kubenswrapper[4750]: E0214 15:07:06.757763 4750 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 14 15:07:06 crc kubenswrapper[4750]: E0214 15:07:06.762945 4750 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plfb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(e4d753a9-5bca-4940-9aa9-72a57f4f32a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 15:07:06 crc kubenswrapper[4750]: E0214 15:07:06.765132 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="e4d753a9-5bca-4940-9aa9-72a57f4f32a2" Feb 14 15:07:07 crc kubenswrapper[4750]: E0214 15:07:07.186083 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e4d753a9-5bca-4940-9aa9-72a57f4f32a2" Feb 14 15:07:07 crc 
kubenswrapper[4750]: I0214 15:07:07.678946 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4v8dn"] Feb 14 15:07:08 crc kubenswrapper[4750]: I0214 15:07:08.197886 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerStarted","Data":"0b6be3283a4e3c6ec6a439c35c62842925a84ac35b86acddeef70d6a6381a4b8"} Feb 14 15:07:09 crc kubenswrapper[4750]: I0214 15:07:09.210632 4750 generic.go:334] "Generic (PLEG): container finished" podID="1ffb2112-a324-4487-840a-4036723f8042" containerID="799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb" exitCode=0 Feb 14 15:07:09 crc kubenswrapper[4750]: I0214 15:07:09.210753 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerDied","Data":"799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb"} Feb 14 15:07:09 crc kubenswrapper[4750]: I0214 15:07:09.743053 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:07:09 crc kubenswrapper[4750]: E0214 15:07:09.744045 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:07:10 crc kubenswrapper[4750]: I0214 15:07:10.226870 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" 
event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerStarted","Data":"45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8"} Feb 14 15:07:12 crc kubenswrapper[4750]: I0214 15:07:12.254427 4750 generic.go:334] "Generic (PLEG): container finished" podID="1ffb2112-a324-4487-840a-4036723f8042" containerID="45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8" exitCode=0 Feb 14 15:07:12 crc kubenswrapper[4750]: I0214 15:07:12.254504 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerDied","Data":"45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8"} Feb 14 15:07:13 crc kubenswrapper[4750]: I0214 15:07:13.267341 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerStarted","Data":"32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c"} Feb 14 15:07:13 crc kubenswrapper[4750]: I0214 15:07:13.302260 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4v8dn" podStartSLOduration=16.582622131 podStartE2EDuration="20.302240285s" podCreationTimestamp="2026-02-14 15:06:53 +0000 UTC" firstStartedPulling="2026-02-14 15:07:09.213201829 +0000 UTC m=+4501.239191300" lastFinishedPulling="2026-02-14 15:07:12.932819973 +0000 UTC m=+4504.958809454" observedRunningTime="2026-02-14 15:07:13.291515951 +0000 UTC m=+4505.317505452" watchObservedRunningTime="2026-02-14 15:07:13.302240285 +0000 UTC m=+4505.328229766" Feb 14 15:07:14 crc kubenswrapper[4750]: I0214 15:07:14.047069 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:07:14 crc kubenswrapper[4750]: I0214 15:07:14.047489 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:07:15 crc kubenswrapper[4750]: I0214 15:07:15.104035 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4v8dn" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="registry-server" probeResult="failure" output=< Feb 14 15:07:15 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:07:15 crc kubenswrapper[4750]: > Feb 14 15:07:15 crc kubenswrapper[4750]: E0214 15:07:15.235824 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:21 crc kubenswrapper[4750]: E0214 15:07:21.607386 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:21 crc kubenswrapper[4750]: I0214 15:07:21.742111 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 
15:07:21 crc kubenswrapper[4750]: E0214 15:07:21.742705 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:07:22 crc kubenswrapper[4750]: I0214 15:07:22.379670 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 15:07:24 crc kubenswrapper[4750]: I0214 15:07:24.146860 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:07:24 crc kubenswrapper[4750]: I0214 15:07:24.202978 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:07:24 crc kubenswrapper[4750]: I0214 15:07:24.431102 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e4d753a9-5bca-4940-9aa9-72a57f4f32a2","Type":"ContainerStarted","Data":"f96e306a1f1fb780d83613f69bfb43c6c97ce1343c341df89356163b83284a2d"} Feb 14 15:07:24 crc kubenswrapper[4750]: I0214 15:07:24.449103 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.291985905 podStartE2EDuration="50.449082263s" podCreationTimestamp="2026-02-14 15:06:34 +0000 UTC" firstStartedPulling="2026-02-14 15:06:36.219217167 +0000 UTC m=+4468.245206688" lastFinishedPulling="2026-02-14 15:07:22.376313555 +0000 UTC m=+4514.402303046" observedRunningTime="2026-02-14 15:07:24.447484038 +0000 UTC m=+4516.473473539" watchObservedRunningTime="2026-02-14 15:07:24.449082263 +0000 UTC m=+4516.475071744" Feb 14 15:07:24 crc 
kubenswrapper[4750]: I0214 15:07:24.878730 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4v8dn"] Feb 14 15:07:25 crc kubenswrapper[4750]: E0214 15:07:25.287975 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:25 crc kubenswrapper[4750]: I0214 15:07:25.443798 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4v8dn" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="registry-server" containerID="cri-o://32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c" gracePeriod=2 Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.030273 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.148770 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjb2c\" (UniqueName: \"kubernetes.io/projected/1ffb2112-a324-4487-840a-4036723f8042-kube-api-access-qjb2c\") pod \"1ffb2112-a324-4487-840a-4036723f8042\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.149269 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-utilities\") pod \"1ffb2112-a324-4487-840a-4036723f8042\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.149416 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-catalog-content\") pod \"1ffb2112-a324-4487-840a-4036723f8042\" (UID: \"1ffb2112-a324-4487-840a-4036723f8042\") " Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.150103 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-utilities" (OuterVolumeSpecName: "utilities") pod "1ffb2112-a324-4487-840a-4036723f8042" (UID: "1ffb2112-a324-4487-840a-4036723f8042"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.150475 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.157510 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffb2112-a324-4487-840a-4036723f8042-kube-api-access-qjb2c" (OuterVolumeSpecName: "kube-api-access-qjb2c") pod "1ffb2112-a324-4487-840a-4036723f8042" (UID: "1ffb2112-a324-4487-840a-4036723f8042"). InnerVolumeSpecName "kube-api-access-qjb2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.220532 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ffb2112-a324-4487-840a-4036723f8042" (UID: "1ffb2112-a324-4487-840a-4036723f8042"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.255244 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjb2c\" (UniqueName: \"kubernetes.io/projected/1ffb2112-a324-4487-840a-4036723f8042-kube-api-access-qjb2c\") on node \"crc\" DevicePath \"\"" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.255336 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffb2112-a324-4487-840a-4036723f8042-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.459411 4750 generic.go:334] "Generic (PLEG): container finished" podID="1ffb2112-a324-4487-840a-4036723f8042" containerID="32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c" exitCode=0 Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.459512 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerDied","Data":"32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c"} Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.459815 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v8dn" event={"ID":"1ffb2112-a324-4487-840a-4036723f8042","Type":"ContainerDied","Data":"0b6be3283a4e3c6ec6a439c35c62842925a84ac35b86acddeef70d6a6381a4b8"} Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.459849 4750 scope.go:117] "RemoveContainer" containerID="32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.459550 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4v8dn" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.514890 4750 scope.go:117] "RemoveContainer" containerID="45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.535090 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4v8dn"] Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.553349 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4v8dn"] Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.556648 4750 scope.go:117] "RemoveContainer" containerID="799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.618608 4750 scope.go:117] "RemoveContainer" containerID="32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c" Feb 14 15:07:26 crc kubenswrapper[4750]: E0214 15:07:26.619056 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c\": container with ID starting with 32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c not found: ID does not exist" containerID="32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.619174 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c"} err="failed to get container status \"32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c\": rpc error: code = NotFound desc = could not find container \"32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c\": container with ID starting with 32b3fe08842053accc95161c6ec643d79df4e0a150eeb1d5de039dc272d68c3c not 
found: ID does not exist" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.619201 4750 scope.go:117] "RemoveContainer" containerID="45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8" Feb 14 15:07:26 crc kubenswrapper[4750]: E0214 15:07:26.619670 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8\": container with ID starting with 45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8 not found: ID does not exist" containerID="45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.619700 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8"} err="failed to get container status \"45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8\": rpc error: code = NotFound desc = could not find container \"45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8\": container with ID starting with 45d71bd92808087105dd40bf69be28930b5038d0c2aa1d438398f85daf1e1bd8 not found: ID does not exist" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.619757 4750 scope.go:117] "RemoveContainer" containerID="799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb" Feb 14 15:07:26 crc kubenswrapper[4750]: E0214 15:07:26.620018 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb\": container with ID starting with 799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb not found: ID does not exist" containerID="799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.620046 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb"} err="failed to get container status \"799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb\": rpc error: code = NotFound desc = could not find container \"799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb\": container with ID starting with 799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb not found: ID does not exist" Feb 14 15:07:26 crc kubenswrapper[4750]: I0214 15:07:26.756127 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ffb2112-a324-4487-840a-4036723f8042" path="/var/lib/kubelet/pods/1ffb2112-a324-4487-840a-4036723f8042/volumes" Feb 14 15:07:32 crc kubenswrapper[4750]: I0214 15:07:32.742493 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:07:32 crc kubenswrapper[4750]: E0214 15:07:32.743481 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:07:35 crc kubenswrapper[4750]: E0214 15:07:35.602856 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:36 crc kubenswrapper[4750]: E0214 15:07:36.338330 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:45 crc kubenswrapper[4750]: I0214 15:07:45.742289 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:07:45 crc kubenswrapper[4750]: E0214 15:07:45.743352 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:07:45 crc kubenswrapper[4750]: E0214 15:07:45.907550 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:48 crc kubenswrapper[4750]: E0214 15:07:48.108741 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:48 crc kubenswrapper[4750]: E0214 15:07:48.110924 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:51 crc kubenswrapper[4750]: E0214 15:07:51.684033 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:07:55 crc kubenswrapper[4750]: E0214 15:07:55.959460 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:08:00 crc kubenswrapper[4750]: I0214 15:08:00.743751 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:08:00 crc kubenswrapper[4750]: E0214 15:08:00.744717 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:08:06 crc kubenswrapper[4750]: E0214 15:08:06.211719 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:08:06 crc kubenswrapper[4750]: E0214 15:08:06.340335 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-conmon-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffb2112_a324_4487_840a_4036723f8042.slice/crio-799b327de86858d0bf62ac6195b88ff098e822b62be553e09b4b42f8ea3f81bb.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:08:08 crc kubenswrapper[4750]: E0214 15:08:08.794308 4750 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1b1def8ce2cc829fc96f58780ecc663c37dc0218588101cc5a789f4f5618c576/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1b1def8ce2cc829fc96f58780ecc663c37dc0218588101cc5a789f4f5618c576/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-marketplace_certified-operators-4v8dn_1ffb2112-a324-4487-840a-4036723f8042/extract-utilities/0.log" to get inode usage: stat /var/log/pods/openshift-marketplace_certified-operators-4v8dn_1ffb2112-a324-4487-840a-4036723f8042/extract-utilities/0.log: no such file or directory Feb 14 15:08:14 crc kubenswrapper[4750]: I0214 15:08:14.743158 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:08:14 crc kubenswrapper[4750]: E0214 15:08:14.743858 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:08:28 crc kubenswrapper[4750]: I0214 15:08:28.758761 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:08:28 crc kubenswrapper[4750]: E0214 15:08:28.759949 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:08:42 crc kubenswrapper[4750]: I0214 15:08:42.744711 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:08:42 crc kubenswrapper[4750]: E0214 15:08:42.747747 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:08:54 crc kubenswrapper[4750]: I0214 15:08:54.743424 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:08:54 crc kubenswrapper[4750]: E0214 15:08:54.744253 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:09:08 crc kubenswrapper[4750]: I0214 15:09:08.744830 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:09:08 crc kubenswrapper[4750]: E0214 15:09:08.749316 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:09:21 crc kubenswrapper[4750]: I0214 15:09:21.743675 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:09:21 crc kubenswrapper[4750]: E0214 15:09:21.745137 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:09:36 crc kubenswrapper[4750]: I0214 15:09:36.742529 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:09:39 crc kubenswrapper[4750]: I0214 15:09:39.049884 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"bd4542899d6784aa8c6f698a2dc2f74fedb4cb2ec294ab80539b7d7f070866bc"} Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.426204 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5trcw"] Feb 14 15:09:40 crc kubenswrapper[4750]: E0214 15:09:40.432032 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="extract-content" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.433959 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="extract-content" Feb 14 15:09:40 crc kubenswrapper[4750]: E0214 15:09:40.434314 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="registry-server" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.434327 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="registry-server" Feb 14 15:09:40 crc kubenswrapper[4750]: E0214 15:09:40.434379 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="extract-utilities" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.434391 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="extract-utilities" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.438724 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ffb2112-a324-4487-840a-4036723f8042" containerName="registry-server" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.452887 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.545489 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5trcw"] Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.551563 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dl7\" (UniqueName: \"kubernetes.io/projected/33866c53-bae0-43ce-8b96-56e81d1678bd-kube-api-access-92dl7\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.551829 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-catalog-content\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.552072 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-utilities\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.655463 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dl7\" (UniqueName: \"kubernetes.io/projected/33866c53-bae0-43ce-8b96-56e81d1678bd-kube-api-access-92dl7\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.655860 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-catalog-content\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.655928 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-utilities\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.664092 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-catalog-content\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.664095 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-utilities\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.688650 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dl7\" (UniqueName: \"kubernetes.io/projected/33866c53-bae0-43ce-8b96-56e81d1678bd-kube-api-access-92dl7\") pod \"redhat-operators-5trcw\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:40 crc kubenswrapper[4750]: I0214 15:09:40.804771 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:09:43 crc kubenswrapper[4750]: I0214 15:09:43.876633 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5trcw"] Feb 14 15:09:43 crc kubenswrapper[4750]: W0214 15:09:43.935720 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33866c53_bae0_43ce_8b96_56e81d1678bd.slice/crio-4574416fc445ef796b746a7137f7fda85f04b601f5e4e7f0058dc9c883ee7cc4 WatchSource:0}: Error finding container 4574416fc445ef796b746a7137f7fda85f04b601f5e4e7f0058dc9c883ee7cc4: Status 404 returned error can't find the container with id 4574416fc445ef796b746a7137f7fda85f04b601f5e4e7f0058dc9c883ee7cc4 Feb 14 15:09:44 crc kubenswrapper[4750]: I0214 15:09:44.106591 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerStarted","Data":"4574416fc445ef796b746a7137f7fda85f04b601f5e4e7f0058dc9c883ee7cc4"} Feb 14 15:09:45 crc kubenswrapper[4750]: I0214 15:09:45.119162 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerDied","Data":"457aaec497247f09adda2e981a124330ef8b560ea0f8adcf5995c8486f9d2ecd"} Feb 14 15:09:45 crc kubenswrapper[4750]: I0214 15:09:45.120726 4750 generic.go:334] "Generic (PLEG): container finished" podID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerID="457aaec497247f09adda2e981a124330ef8b560ea0f8adcf5995c8486f9d2ecd" exitCode=0 Feb 14 15:09:45 crc kubenswrapper[4750]: I0214 15:09:45.129136 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 15:09:47 crc kubenswrapper[4750]: I0214 15:09:47.203161 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerStarted","Data":"91dd97fff4d8487114425bc1230dc2b1f45167dcc4286ecc376202f64c351396"} Feb 14 15:09:54 crc kubenswrapper[4750]: I0214 15:09:54.283091 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerDied","Data":"91dd97fff4d8487114425bc1230dc2b1f45167dcc4286ecc376202f64c351396"} Feb 14 15:09:54 crc kubenswrapper[4750]: I0214 15:09:54.283797 4750 generic.go:334] "Generic (PLEG): container finished" podID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerID="91dd97fff4d8487114425bc1230dc2b1f45167dcc4286ecc376202f64c351396" exitCode=0 Feb 14 15:09:56 crc kubenswrapper[4750]: I0214 15:09:56.320580 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerStarted","Data":"e9a8bcf32216c615868b6073c44f3411ec9a2891f0c26c0720fe9a0e7337a9d0"} Feb 14 15:09:56 crc kubenswrapper[4750]: I0214 15:09:56.369839 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5trcw" podStartSLOduration=6.675289543 podStartE2EDuration="16.360733026s" podCreationTimestamp="2026-02-14 15:09:40 +0000 UTC" firstStartedPulling="2026-02-14 15:09:45.123108172 +0000 UTC m=+4657.149097653" lastFinishedPulling="2026-02-14 15:09:54.808551655 +0000 UTC m=+4666.834541136" observedRunningTime="2026-02-14 15:09:56.347778538 +0000 UTC m=+4668.373768029" watchObservedRunningTime="2026-02-14 15:09:56.360733026 +0000 UTC m=+4668.386722507" Feb 14 15:10:00 crc kubenswrapper[4750]: I0214 15:10:00.805233 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:10:00 crc kubenswrapper[4750]: I0214 15:10:00.805529 4750 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:10:01 crc kubenswrapper[4750]: I0214 15:10:01.886626 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" probeResult="failure" output=< Feb 14 15:10:01 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:10:01 crc kubenswrapper[4750]: > Feb 14 15:10:11 crc kubenswrapper[4750]: I0214 15:10:11.893707 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" probeResult="failure" output=< Feb 14 15:10:11 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:10:11 crc kubenswrapper[4750]: > Feb 14 15:10:21 crc kubenswrapper[4750]: I0214 15:10:21.961320 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" probeResult="failure" output=< Feb 14 15:10:21 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:10:21 crc kubenswrapper[4750]: > Feb 14 15:10:31 crc kubenswrapper[4750]: I0214 15:10:31.859013 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" probeResult="failure" output=< Feb 14 15:10:31 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:10:31 crc kubenswrapper[4750]: > Feb 14 15:10:41 crc kubenswrapper[4750]: I0214 15:10:41.871792 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" 
containerName="registry-server" probeResult="failure" output=< Feb 14 15:10:41 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:10:41 crc kubenswrapper[4750]: > Feb 14 15:10:51 crc kubenswrapper[4750]: I0214 15:10:51.862290 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" probeResult="failure" output=< Feb 14 15:10:51 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:10:51 crc kubenswrapper[4750]: > Feb 14 15:11:00 crc kubenswrapper[4750]: I0214 15:11:00.882261 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:11:00 crc kubenswrapper[4750]: I0214 15:11:00.934567 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:11:01 crc kubenswrapper[4750]: I0214 15:11:01.081857 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5trcw"] Feb 14 15:11:02 crc kubenswrapper[4750]: I0214 15:11:02.061980 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5trcw" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" containerID="cri-o://e9a8bcf32216c615868b6073c44f3411ec9a2891f0c26c0720fe9a0e7337a9d0" gracePeriod=2 Feb 14 15:11:03 crc kubenswrapper[4750]: I0214 15:11:03.073449 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerDied","Data":"e9a8bcf32216c615868b6073c44f3411ec9a2891f0c26c0720fe9a0e7337a9d0"} Feb 14 15:11:03 crc kubenswrapper[4750]: I0214 15:11:03.073307 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerID="e9a8bcf32216c615868b6073c44f3411ec9a2891f0c26c0720fe9a0e7337a9d0" exitCode=0 Feb 14 15:11:03 crc kubenswrapper[4750]: I0214 15:11:03.937824 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.065852 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92dl7\" (UniqueName: \"kubernetes.io/projected/33866c53-bae0-43ce-8b96-56e81d1678bd-kube-api-access-92dl7\") pod \"33866c53-bae0-43ce-8b96-56e81d1678bd\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.066139 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-utilities\") pod \"33866c53-bae0-43ce-8b96-56e81d1678bd\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.066282 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-catalog-content\") pod \"33866c53-bae0-43ce-8b96-56e81d1678bd\" (UID: \"33866c53-bae0-43ce-8b96-56e81d1678bd\") " Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.071216 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-utilities" (OuterVolumeSpecName: "utilities") pod "33866c53-bae0-43ce-8b96-56e81d1678bd" (UID: "33866c53-bae0-43ce-8b96-56e81d1678bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.088966 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5trcw" event={"ID":"33866c53-bae0-43ce-8b96-56e81d1678bd","Type":"ContainerDied","Data":"4574416fc445ef796b746a7137f7fda85f04b601f5e4e7f0058dc9c883ee7cc4"} Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.089040 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5trcw" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.091015 4750 scope.go:117] "RemoveContainer" containerID="e9a8bcf32216c615868b6073c44f3411ec9a2891f0c26c0720fe9a0e7337a9d0" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.096442 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33866c53-bae0-43ce-8b96-56e81d1678bd-kube-api-access-92dl7" (OuterVolumeSpecName: "kube-api-access-92dl7") pod "33866c53-bae0-43ce-8b96-56e81d1678bd" (UID: "33866c53-bae0-43ce-8b96-56e81d1678bd"). InnerVolumeSpecName "kube-api-access-92dl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.169907 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.169950 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92dl7\" (UniqueName: \"kubernetes.io/projected/33866c53-bae0-43ce-8b96-56e81d1678bd-kube-api-access-92dl7\") on node \"crc\" DevicePath \"\"" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.190634 4750 scope.go:117] "RemoveContainer" containerID="91dd97fff4d8487114425bc1230dc2b1f45167dcc4286ecc376202f64c351396" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.216764 4750 scope.go:117] "RemoveContainer" containerID="457aaec497247f09adda2e981a124330ef8b560ea0f8adcf5995c8486f9d2ecd" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.221760 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33866c53-bae0-43ce-8b96-56e81d1678bd" (UID: "33866c53-bae0-43ce-8b96-56e81d1678bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.272135 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33866c53-bae0-43ce-8b96-56e81d1678bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.438500 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5trcw"] Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.448071 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5trcw"] Feb 14 15:11:04 crc kubenswrapper[4750]: I0214 15:11:04.764762 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" path="/var/lib/kubelet/pods/33866c53-bae0-43ce-8b96-56e81d1678bd/volumes" Feb 14 15:12:00 crc kubenswrapper[4750]: I0214 15:12:00.129418 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:12:00 crc kubenswrapper[4750]: I0214 15:12:00.130174 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:12:30 crc kubenswrapper[4750]: I0214 15:12:30.129427 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 14 15:12:30 crc kubenswrapper[4750]: I0214 15:12:30.130059 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.129151 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.129730 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.129775 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.137346 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd4542899d6784aa8c6f698a2dc2f74fedb4cb2ec294ab80539b7d7f070866bc"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.138980 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://bd4542899d6784aa8c6f698a2dc2f74fedb4cb2ec294ab80539b7d7f070866bc" gracePeriod=600 Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.750271 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="bd4542899d6784aa8c6f698a2dc2f74fedb4cb2ec294ab80539b7d7f070866bc" exitCode=0 Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.759736 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"bd4542899d6784aa8c6f698a2dc2f74fedb4cb2ec294ab80539b7d7f070866bc"} Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.759863 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524"} Feb 14 15:13:00 crc kubenswrapper[4750]: I0214 15:13:00.759913 4750 scope.go:117] "RemoveContainer" containerID="f3760a2e2429f16013de7548f8689c41ce848bd171de44349342cf3af03be8af" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.043761 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2xpx"] Feb 14 15:14:09 crc kubenswrapper[4750]: E0214 15:14:09.046666 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="extract-content" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.047808 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="extract-content" Feb 14 15:14:09 crc kubenswrapper[4750]: E0214 15:14:09.047859 4750 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.047870 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" Feb 14 15:14:09 crc kubenswrapper[4750]: E0214 15:14:09.047891 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="extract-utilities" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.047899 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="extract-utilities" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.049357 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="33866c53-bae0-43ce-8b96-56e81d1678bd" containerName="registry-server" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.056114 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.087244 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z67s\" (UniqueName: \"kubernetes.io/projected/f732c4ec-27ed-4c72-9419-91ecb8c5c286-kube-api-access-9z67s\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.087551 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-utilities\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.087725 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-catalog-content\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.126794 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2xpx"] Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.189363 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z67s\" (UniqueName: \"kubernetes.io/projected/f732c4ec-27ed-4c72-9419-91ecb8c5c286-kube-api-access-9z67s\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.189445 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-utilities\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.189500 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-catalog-content\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.191366 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-catalog-content\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.191491 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-utilities\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.213487 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z67s\" (UniqueName: \"kubernetes.io/projected/f732c4ec-27ed-4c72-9419-91ecb8c5c286-kube-api-access-9z67s\") pod \"redhat-marketplace-v2xpx\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:09 crc kubenswrapper[4750]: I0214 15:14:09.386132 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:10 crc kubenswrapper[4750]: I0214 15:14:10.127869 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2xpx"] Feb 14 15:14:10 crc kubenswrapper[4750]: I0214 15:14:10.596504 4750 generic.go:334] "Generic (PLEG): container finished" podID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerID="6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea" exitCode=0 Feb 14 15:14:10 crc kubenswrapper[4750]: I0214 15:14:10.596612 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerDied","Data":"6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea"} Feb 14 15:14:10 crc kubenswrapper[4750]: I0214 15:14:10.596776 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerStarted","Data":"0f9ae996549e34ed3b81538712e0ffe77ab6f16e45341941739d32b022b64ad6"} Feb 14 15:14:11 crc kubenswrapper[4750]: I0214 15:14:11.611677 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerStarted","Data":"dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a"} Feb 14 15:14:12 crc kubenswrapper[4750]: I0214 15:14:12.626807 4750 generic.go:334] "Generic (PLEG): container finished" podID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerID="dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a" exitCode=0 Feb 14 15:14:12 crc kubenswrapper[4750]: I0214 15:14:12.626870 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" 
event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerDied","Data":"dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a"} Feb 14 15:14:13 crc kubenswrapper[4750]: I0214 15:14:13.638644 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerStarted","Data":"1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361"} Feb 14 15:14:13 crc kubenswrapper[4750]: I0214 15:14:13.662060 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2xpx" podStartSLOduration=3.251994427 podStartE2EDuration="5.660813312s" podCreationTimestamp="2026-02-14 15:14:08 +0000 UTC" firstStartedPulling="2026-02-14 15:14:10.600445147 +0000 UTC m=+4922.626434648" lastFinishedPulling="2026-02-14 15:14:13.009264052 +0000 UTC m=+4925.035253533" observedRunningTime="2026-02-14 15:14:13.653251358 +0000 UTC m=+4925.679240859" watchObservedRunningTime="2026-02-14 15:14:13.660813312 +0000 UTC m=+4925.686802803" Feb 14 15:14:19 crc kubenswrapper[4750]: I0214 15:14:19.387620 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:19 crc kubenswrapper[4750]: I0214 15:14:19.388599 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:19 crc kubenswrapper[4750]: I0214 15:14:19.483042 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:19 crc kubenswrapper[4750]: I0214 15:14:19.750786 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:19 crc kubenswrapper[4750]: I0214 15:14:19.810385 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-v2xpx"] Feb 14 15:14:21 crc kubenswrapper[4750]: I0214 15:14:21.732749 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v2xpx" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="registry-server" containerID="cri-o://1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361" gracePeriod=2 Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.409905 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.433768 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z67s\" (UniqueName: \"kubernetes.io/projected/f732c4ec-27ed-4c72-9419-91ecb8c5c286-kube-api-access-9z67s\") pod \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.446144 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f732c4ec-27ed-4c72-9419-91ecb8c5c286-kube-api-access-9z67s" (OuterVolumeSpecName: "kube-api-access-9z67s") pod "f732c4ec-27ed-4c72-9419-91ecb8c5c286" (UID: "f732c4ec-27ed-4c72-9419-91ecb8c5c286"). InnerVolumeSpecName "kube-api-access-9z67s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.537198 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-catalog-content\") pod \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.537416 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-utilities\") pod \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\" (UID: \"f732c4ec-27ed-4c72-9419-91ecb8c5c286\") " Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.538303 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z67s\" (UniqueName: \"kubernetes.io/projected/f732c4ec-27ed-4c72-9419-91ecb8c5c286-kube-api-access-9z67s\") on node \"crc\" DevicePath \"\"" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.538911 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-utilities" (OuterVolumeSpecName: "utilities") pod "f732c4ec-27ed-4c72-9419-91ecb8c5c286" (UID: "f732c4ec-27ed-4c72-9419-91ecb8c5c286"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.559741 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f732c4ec-27ed-4c72-9419-91ecb8c5c286" (UID: "f732c4ec-27ed-4c72-9419-91ecb8c5c286"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.642022 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.642067 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f732c4ec-27ed-4c72-9419-91ecb8c5c286-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.748581 4750 generic.go:334] "Generic (PLEG): container finished" podID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerID="1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361" exitCode=0 Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.748705 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2xpx" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.770953 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerDied","Data":"1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361"} Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.771297 4750 scope.go:117] "RemoveContainer" containerID="1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361" Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.771028 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2xpx" event={"ID":"f732c4ec-27ed-4c72-9419-91ecb8c5c286","Type":"ContainerDied","Data":"0f9ae996549e34ed3b81538712e0ffe77ab6f16e45341941739d32b022b64ad6"} Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.805294 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-v2xpx"] Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.815474 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2xpx"] Feb 14 15:14:22 crc kubenswrapper[4750]: I0214 15:14:22.819360 4750 scope.go:117] "RemoveContainer" containerID="dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.125623 4750 scope.go:117] "RemoveContainer" containerID="6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.154220 4750 scope.go:117] "RemoveContainer" containerID="1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361" Feb 14 15:14:24 crc kubenswrapper[4750]: E0214 15:14:24.156527 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361\": container with ID starting with 1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361 not found: ID does not exist" containerID="1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.156604 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361"} err="failed to get container status \"1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361\": rpc error: code = NotFound desc = could not find container \"1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361\": container with ID starting with 1153506f0ac3539e55a4b43ca3f6f3ab31bcc13b1e82236ad599042da0ee7361 not found: ID does not exist" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.156640 4750 scope.go:117] "RemoveContainer" 
containerID="dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a" Feb 14 15:14:24 crc kubenswrapper[4750]: E0214 15:14:24.157165 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a\": container with ID starting with dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a not found: ID does not exist" containerID="dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.157236 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a"} err="failed to get container status \"dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a\": rpc error: code = NotFound desc = could not find container \"dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a\": container with ID starting with dafd5af0c1387d40398bf81ec17a084e8ef160590f3280236ffa0a2d2409da6a not found: ID does not exist" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.157294 4750 scope.go:117] "RemoveContainer" containerID="6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea" Feb 14 15:14:24 crc kubenswrapper[4750]: E0214 15:14:24.157824 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea\": container with ID starting with 6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea not found: ID does not exist" containerID="6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.157856 4750 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea"} err="failed to get container status \"6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea\": rpc error: code = NotFound desc = could not find container \"6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea\": container with ID starting with 6157852802a33468ac29f9e116e4cc7a7ea4d3515931243644f7c8bfd0ad18ea not found: ID does not exist" Feb 14 15:14:24 crc kubenswrapper[4750]: I0214 15:14:24.760100 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" path="/var/lib/kubelet/pods/f732c4ec-27ed-4c72-9419-91ecb8c5c286/volumes" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.129289 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.130542 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.194074 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4"] Feb 14 15:15:00 crc kubenswrapper[4750]: E0214 15:15:00.195061 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="extract-content" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.195093 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="extract-content" Feb 14 15:15:00 crc kubenswrapper[4750]: E0214 15:15:00.195240 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="extract-utilities" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.195264 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="extract-utilities" Feb 14 15:15:00 crc kubenswrapper[4750]: E0214 15:15:00.195282 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="registry-server" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.195293 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="registry-server" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.195889 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f732c4ec-27ed-4c72-9419-91ecb8c5c286" containerName="registry-server" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.196959 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.207426 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4"] Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.209516 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.212515 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.264596 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcpg5\" (UniqueName: \"kubernetes.io/projected/5f792132-0605-4281-8131-9961a1a7485b-kube-api-access-fcpg5\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.264770 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f792132-0605-4281-8131-9961a1a7485b-config-volume\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.264788 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f792132-0605-4281-8131-9961a1a7485b-secret-volume\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.367303 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcpg5\" (UniqueName: \"kubernetes.io/projected/5f792132-0605-4281-8131-9961a1a7485b-kube-api-access-fcpg5\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.367607 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f792132-0605-4281-8131-9961a1a7485b-config-volume\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.367660 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f792132-0605-4281-8131-9961a1a7485b-secret-volume\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.368992 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f792132-0605-4281-8131-9961a1a7485b-config-volume\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.375036 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5f792132-0605-4281-8131-9961a1a7485b-secret-volume\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.383881 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcpg5\" (UniqueName: \"kubernetes.io/projected/5f792132-0605-4281-8131-9961a1a7485b-kube-api-access-fcpg5\") pod \"collect-profiles-29518035-b96k4\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:00 crc kubenswrapper[4750]: I0214 15:15:00.529432 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:01 crc kubenswrapper[4750]: I0214 15:15:01.017613 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4"] Feb 14 15:15:01 crc kubenswrapper[4750]: I0214 15:15:01.164667 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" event={"ID":"5f792132-0605-4281-8131-9961a1a7485b","Type":"ContainerStarted","Data":"fbc5ac4e12c7875f64f3a4c26cac48176336a553e1663c90b76cf2da91f3a55f"} Feb 14 15:15:02 crc kubenswrapper[4750]: I0214 15:15:02.177164 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f792132-0605-4281-8131-9961a1a7485b" containerID="d2f99cfee34bbd00f90e5695e8bcb7903303df4f1f3113106749e48fbbcd48ed" exitCode=0 Feb 14 15:15:02 crc kubenswrapper[4750]: I0214 15:15:02.177234 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" 
event={"ID":"5f792132-0605-4281-8131-9961a1a7485b","Type":"ContainerDied","Data":"d2f99cfee34bbd00f90e5695e8bcb7903303df4f1f3113106749e48fbbcd48ed"} Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.001399 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.066332 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcpg5\" (UniqueName: \"kubernetes.io/projected/5f792132-0605-4281-8131-9961a1a7485b-kube-api-access-fcpg5\") pod \"5f792132-0605-4281-8131-9961a1a7485b\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.066443 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f792132-0605-4281-8131-9961a1a7485b-secret-volume\") pod \"5f792132-0605-4281-8131-9961a1a7485b\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.066467 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f792132-0605-4281-8131-9961a1a7485b-config-volume\") pod \"5f792132-0605-4281-8131-9961a1a7485b\" (UID: \"5f792132-0605-4281-8131-9961a1a7485b\") " Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.069755 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f792132-0605-4281-8131-9961a1a7485b-config-volume" (OuterVolumeSpecName: "config-volume") pod "5f792132-0605-4281-8131-9961a1a7485b" (UID: "5f792132-0605-4281-8131-9961a1a7485b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.090889 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f792132-0605-4281-8131-9961a1a7485b-kube-api-access-fcpg5" (OuterVolumeSpecName: "kube-api-access-fcpg5") pod "5f792132-0605-4281-8131-9961a1a7485b" (UID: "5f792132-0605-4281-8131-9961a1a7485b"). InnerVolumeSpecName "kube-api-access-fcpg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.092179 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f792132-0605-4281-8131-9961a1a7485b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5f792132-0605-4281-8131-9961a1a7485b" (UID: "5f792132-0605-4281-8131-9961a1a7485b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.171898 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcpg5\" (UniqueName: \"kubernetes.io/projected/5f792132-0605-4281-8131-9961a1a7485b-kube-api-access-fcpg5\") on node \"crc\" DevicePath \"\"" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.171937 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f792132-0605-4281-8131-9961a1a7485b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.171947 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f792132-0605-4281-8131-9961a1a7485b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.201652 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" 
event={"ID":"5f792132-0605-4281-8131-9961a1a7485b","Type":"ContainerDied","Data":"fbc5ac4e12c7875f64f3a4c26cac48176336a553e1663c90b76cf2da91f3a55f"} Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.201695 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc5ac4e12c7875f64f3a4c26cac48176336a553e1663c90b76cf2da91f3a55f" Feb 14 15:15:04 crc kubenswrapper[4750]: I0214 15:15:04.201749 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518035-b96k4" Feb 14 15:15:05 crc kubenswrapper[4750]: I0214 15:15:05.137066 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b"] Feb 14 15:15:05 crc kubenswrapper[4750]: I0214 15:15:05.151595 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517990-n9c7b"] Feb 14 15:15:06 crc kubenswrapper[4750]: I0214 15:15:06.765319 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213a52b1-d668-4878-a752-4254ae6e6534" path="/var/lib/kubelet/pods/213a52b1-d668-4878-a752-4254ae6e6534/volumes" Feb 14 15:15:29 crc kubenswrapper[4750]: I0214 15:15:29.215778 4750 scope.go:117] "RemoveContainer" containerID="61d6940c897cc02f885c3b9a548d36f2d4bffffee47ff13655a57ef59871511f" Feb 14 15:15:30 crc kubenswrapper[4750]: I0214 15:15:30.129072 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:15:30 crc kubenswrapper[4750]: I0214 15:15:30.129580 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.129312 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.129808 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.129848 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.130682 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.130733 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" gracePeriod=600 Feb 14 15:16:00 crc kubenswrapper[4750]: E0214 
15:16:00.255565 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod581740c6_1f28_4471_8131_5d5042cc59f5.slice/crio-conmon-26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:16:00 crc kubenswrapper[4750]: E0214 15:16:00.271630 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.834378 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" exitCode=0 Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.834420 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524"} Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.834454 4750 scope.go:117] "RemoveContainer" containerID="bd4542899d6784aa8c6f698a2dc2f74fedb4cb2ec294ab80539b7d7f070866bc" Feb 14 15:16:00 crc kubenswrapper[4750]: I0214 15:16:00.835264 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:16:00 crc kubenswrapper[4750]: E0214 15:16:00.835608 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:16:15 crc kubenswrapper[4750]: I0214 15:16:15.742724 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:16:15 crc kubenswrapper[4750]: E0214 15:16:15.743644 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:16:26 crc kubenswrapper[4750]: I0214 15:16:26.742293 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:16:26 crc kubenswrapper[4750]: E0214 15:16:26.743287 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:16:41 crc kubenswrapper[4750]: I0214 15:16:41.742082 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:16:41 crc kubenswrapper[4750]: E0214 15:16:41.743011 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:16:56 crc kubenswrapper[4750]: I0214 15:16:56.742664 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:16:56 crc kubenswrapper[4750]: E0214 15:16:56.743969 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:17:09 crc kubenswrapper[4750]: I0214 15:17:09.742354 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:17:09 crc kubenswrapper[4750]: E0214 15:17:09.743480 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.037031 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d7bxt"] Feb 14 15:17:15 crc kubenswrapper[4750]: E0214 15:17:15.038478 4750 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5f792132-0605-4281-8131-9961a1a7485b" containerName="collect-profiles" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.038502 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f792132-0605-4281-8131-9961a1a7485b" containerName="collect-profiles" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.038928 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f792132-0605-4281-8131-9961a1a7485b" containerName="collect-profiles" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.041448 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.050303 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7bxt"] Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.217297 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-utilities\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.217615 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-catalog-content\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.218022 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9txx\" (UniqueName: \"kubernetes.io/projected/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-kube-api-access-p9txx\") pod \"community-operators-d7bxt\" (UID: 
\"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.320497 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9txx\" (UniqueName: \"kubernetes.io/projected/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-kube-api-access-p9txx\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.320625 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-utilities\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.320752 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-catalog-content\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.321137 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-utilities\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.321229 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-catalog-content\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") 
" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.347105 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9txx\" (UniqueName: \"kubernetes.io/projected/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-kube-api-access-p9txx\") pod \"community-operators-d7bxt\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:15 crc kubenswrapper[4750]: I0214 15:17:15.375865 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:16 crc kubenswrapper[4750]: I0214 15:17:16.755676 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7bxt"] Feb 14 15:17:16 crc kubenswrapper[4750]: I0214 15:17:16.928713 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerStarted","Data":"d381c5e5f1e5d8a16417ac4ae2eeaf884a24f6195304cdfb61299118f22d67a1"} Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.220536 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nq8gl"] Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.222792 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.233696 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq8gl"] Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.377866 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-utilities\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.378053 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfh5\" (UniqueName: \"kubernetes.io/projected/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-kube-api-access-wnfh5\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.378479 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-catalog-content\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.480769 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-catalog-content\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.480942 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-utilities\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.481021 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnfh5\" (UniqueName: \"kubernetes.io/projected/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-kube-api-access-wnfh5\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.481491 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-utilities\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.481808 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-catalog-content\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.507641 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnfh5\" (UniqueName: \"kubernetes.io/projected/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-kube-api-access-wnfh5\") pod \"certified-operators-nq8gl\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.545703 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.944591 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerID="002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd" exitCode=0 Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.944659 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerDied","Data":"002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd"} Feb 14 15:17:17 crc kubenswrapper[4750]: I0214 15:17:17.950365 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 15:17:18 crc kubenswrapper[4750]: I0214 15:17:18.737091 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq8gl"] Feb 14 15:17:18 crc kubenswrapper[4750]: W0214 15:17:18.748582 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa6ff4d_bfe7_44cf_8ddf_607c3b211868.slice/crio-b30fa49fc58e189199304c6dd28ede873fcb5ff7aaf17d6396197f7ca9ce05d9 WatchSource:0}: Error finding container b30fa49fc58e189199304c6dd28ede873fcb5ff7aaf17d6396197f7ca9ce05d9: Status 404 returned error can't find the container with id b30fa49fc58e189199304c6dd28ede873fcb5ff7aaf17d6396197f7ca9ce05d9 Feb 14 15:17:18 crc kubenswrapper[4750]: I0214 15:17:18.955627 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq8gl" event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerStarted","Data":"b30fa49fc58e189199304c6dd28ede873fcb5ff7aaf17d6396197f7ca9ce05d9"} Feb 14 15:17:19 crc kubenswrapper[4750]: I0214 15:17:19.968448 4750 generic.go:334] "Generic (PLEG): container finished" 
podID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerID="6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af" exitCode=0 Feb 14 15:17:19 crc kubenswrapper[4750]: I0214 15:17:19.968532 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq8gl" event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerDied","Data":"6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af"} Feb 14 15:17:19 crc kubenswrapper[4750]: I0214 15:17:19.971396 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerStarted","Data":"e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509"} Feb 14 15:17:20 crc kubenswrapper[4750]: I0214 15:17:20.985194 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq8gl" event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerStarted","Data":"955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e"} Feb 14 15:17:20 crc kubenswrapper[4750]: I0214 15:17:20.987358 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerID="e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509" exitCode=0 Feb 14 15:17:20 crc kubenswrapper[4750]: I0214 15:17:20.987403 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerDied","Data":"e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509"} Feb 14 15:17:22 crc kubenswrapper[4750]: I0214 15:17:22.002964 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" 
event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerStarted","Data":"10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8"} Feb 14 15:17:22 crc kubenswrapper[4750]: I0214 15:17:22.019493 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d7bxt" podStartSLOduration=3.585219088 podStartE2EDuration="7.019474481s" podCreationTimestamp="2026-02-14 15:17:15 +0000 UTC" firstStartedPulling="2026-02-14 15:17:17.949162167 +0000 UTC m=+5109.975151648" lastFinishedPulling="2026-02-14 15:17:21.38341755 +0000 UTC m=+5113.409407041" observedRunningTime="2026-02-14 15:17:22.018476312 +0000 UTC m=+5114.044465803" watchObservedRunningTime="2026-02-14 15:17:22.019474481 +0000 UTC m=+5114.045463962" Feb 14 15:17:22 crc kubenswrapper[4750]: I0214 15:17:22.741769 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:17:22 crc kubenswrapper[4750]: E0214 15:17:22.742392 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:17:23 crc kubenswrapper[4750]: I0214 15:17:23.020396 4750 generic.go:334] "Generic (PLEG): container finished" podID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerID="955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e" exitCode=0 Feb 14 15:17:23 crc kubenswrapper[4750]: I0214 15:17:23.020466 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq8gl" 
event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerDied","Data":"955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e"} Feb 14 15:17:24 crc kubenswrapper[4750]: I0214 15:17:24.051309 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq8gl" event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerStarted","Data":"cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6"} Feb 14 15:17:25 crc kubenswrapper[4750]: I0214 15:17:25.376565 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:25 crc kubenswrapper[4750]: I0214 15:17:25.376825 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:26 crc kubenswrapper[4750]: I0214 15:17:26.422703 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d7bxt" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="registry-server" probeResult="failure" output=< Feb 14 15:17:26 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:17:26 crc kubenswrapper[4750]: > Feb 14 15:17:27 crc kubenswrapper[4750]: I0214 15:17:27.546497 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:27 crc kubenswrapper[4750]: I0214 15:17:27.546859 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:29 crc kubenswrapper[4750]: I0214 15:17:29.003947 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nq8gl" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="registry-server" probeResult="failure" output=< Feb 14 15:17:29 crc kubenswrapper[4750]: timeout: failed to connect 
service ":50051" within 1s Feb 14 15:17:29 crc kubenswrapper[4750]: > Feb 14 15:17:35 crc kubenswrapper[4750]: I0214 15:17:35.443422 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:35 crc kubenswrapper[4750]: I0214 15:17:35.476027 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nq8gl" podStartSLOduration=15.038857331 podStartE2EDuration="18.476004535s" podCreationTimestamp="2026-02-14 15:17:17 +0000 UTC" firstStartedPulling="2026-02-14 15:17:19.970786716 +0000 UTC m=+5111.996776197" lastFinishedPulling="2026-02-14 15:17:23.40793393 +0000 UTC m=+5115.433923401" observedRunningTime="2026-02-14 15:17:24.076350318 +0000 UTC m=+5116.102339799" watchObservedRunningTime="2026-02-14 15:17:35.476004535 +0000 UTC m=+5127.501994026" Feb 14 15:17:35 crc kubenswrapper[4750]: I0214 15:17:35.499786 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:35 crc kubenswrapper[4750]: I0214 15:17:35.700075 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d7bxt"] Feb 14 15:17:36 crc kubenswrapper[4750]: I0214 15:17:36.742698 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:17:36 crc kubenswrapper[4750]: E0214 15:17:36.744158 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.234186 
4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d7bxt" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="registry-server" containerID="cri-o://10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8" gracePeriod=2 Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.617715 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.692573 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.900467 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.907147 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9txx\" (UniqueName: \"kubernetes.io/projected/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-kube-api-access-p9txx\") pod \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.907310 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-catalog-content\") pod \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.907588 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-utilities\") pod \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\" (UID: \"5f3fa922-1752-437b-a6f4-0246c8f6cbbf\") " Feb 14 15:17:37 crc kubenswrapper[4750]: 
I0214 15:17:37.908246 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-utilities" (OuterVolumeSpecName: "utilities") pod "5f3fa922-1752-437b-a6f4-0246c8f6cbbf" (UID: "5f3fa922-1752-437b-a6f4-0246c8f6cbbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.908685 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.920594 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-kube-api-access-p9txx" (OuterVolumeSpecName: "kube-api-access-p9txx") pod "5f3fa922-1752-437b-a6f4-0246c8f6cbbf" (UID: "5f3fa922-1752-437b-a6f4-0246c8f6cbbf"). InnerVolumeSpecName "kube-api-access-p9txx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:17:37 crc kubenswrapper[4750]: I0214 15:17:37.966254 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f3fa922-1752-437b-a6f4-0246c8f6cbbf" (UID: "5f3fa922-1752-437b-a6f4-0246c8f6cbbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.011844 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9txx\" (UniqueName: \"kubernetes.io/projected/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-kube-api-access-p9txx\") on node \"crc\" DevicePath \"\"" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.011888 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3fa922-1752-437b-a6f4-0246c8f6cbbf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.090328 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq8gl"] Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.252530 4750 generic.go:334] "Generic (PLEG): container finished" podID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerID="10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8" exitCode=0 Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.252620 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerDied","Data":"10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8"} Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.252644 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d7bxt" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.252695 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7bxt" event={"ID":"5f3fa922-1752-437b-a6f4-0246c8f6cbbf","Type":"ContainerDied","Data":"d381c5e5f1e5d8a16417ac4ae2eeaf884a24f6195304cdfb61299118f22d67a1"} Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.252724 4750 scope.go:117] "RemoveContainer" containerID="10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.305246 4750 scope.go:117] "RemoveContainer" containerID="e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.319218 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d7bxt"] Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.331244 4750 scope.go:117] "RemoveContainer" containerID="002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.331322 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d7bxt"] Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.406977 4750 scope.go:117] "RemoveContainer" containerID="10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8" Feb 14 15:17:38 crc kubenswrapper[4750]: E0214 15:17:38.407472 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8\": container with ID starting with 10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8 not found: ID does not exist" containerID="10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.407557 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8"} err="failed to get container status \"10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8\": rpc error: code = NotFound desc = could not find container \"10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8\": container with ID starting with 10b9e79cd5670a699331e81b026fce7a13bba58402f1570d7a226bab415c02e8 not found: ID does not exist" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.407598 4750 scope.go:117] "RemoveContainer" containerID="e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509" Feb 14 15:17:38 crc kubenswrapper[4750]: E0214 15:17:38.408065 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509\": container with ID starting with e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509 not found: ID does not exist" containerID="e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.408154 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509"} err="failed to get container status \"e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509\": rpc error: code = NotFound desc = could not find container \"e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509\": container with ID starting with e9eadee918625d9dc08429a60a0e86d3327dbadf40d650cc91068ae25c804509 not found: ID does not exist" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.408201 4750 scope.go:117] "RemoveContainer" containerID="002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd" Feb 14 15:17:38 crc kubenswrapper[4750]: E0214 
15:17:38.408529 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd\": container with ID starting with 002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd not found: ID does not exist" containerID="002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.408560 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd"} err="failed to get container status \"002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd\": rpc error: code = NotFound desc = could not find container \"002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd\": container with ID starting with 002b73effd733c8c8872281f8cdf5ed1b97f1e36616bb2cf4566c6811fd757fd not found: ID does not exist" Feb 14 15:17:38 crc kubenswrapper[4750]: I0214 15:17:38.761691 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" path="/var/lib/kubelet/pods/5f3fa922-1752-437b-a6f4-0246c8f6cbbf/volumes" Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.270087 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nq8gl" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="registry-server" containerID="cri-o://cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6" gracePeriod=2 Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.819520 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.957433 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-catalog-content\") pod \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.957608 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-utilities\") pod \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.957682 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnfh5\" (UniqueName: \"kubernetes.io/projected/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-kube-api-access-wnfh5\") pod \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\" (UID: \"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868\") " Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.958352 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-utilities" (OuterVolumeSpecName: "utilities") pod "2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" (UID: "2aa6ff4d-bfe7-44cf-8ddf-607c3b211868"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:17:39 crc kubenswrapper[4750]: I0214 15:17:39.965042 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-kube-api-access-wnfh5" (OuterVolumeSpecName: "kube-api-access-wnfh5") pod "2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" (UID: "2aa6ff4d-bfe7-44cf-8ddf-607c3b211868"). InnerVolumeSpecName "kube-api-access-wnfh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.012094 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" (UID: "2aa6ff4d-bfe7-44cf-8ddf-607c3b211868"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.060631 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.060668 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.060678 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnfh5\" (UniqueName: \"kubernetes.io/projected/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868-kube-api-access-wnfh5\") on node \"crc\" DevicePath \"\"" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.287870 4750 generic.go:334] "Generic (PLEG): container finished" podID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerID="cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6" exitCode=0 Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.291272 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq8gl" event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerDied","Data":"cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6"} Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.291458 4750 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-nq8gl" event={"ID":"2aa6ff4d-bfe7-44cf-8ddf-607c3b211868","Type":"ContainerDied","Data":"b30fa49fc58e189199304c6dd28ede873fcb5ff7aaf17d6396197f7ca9ce05d9"} Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.291500 4750 scope.go:117] "RemoveContainer" containerID="cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.291665 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq8gl" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.319487 4750 scope.go:117] "RemoveContainer" containerID="955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.344455 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq8gl"] Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.354682 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nq8gl"] Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.771061 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" path="/var/lib/kubelet/pods/2aa6ff4d-bfe7-44cf-8ddf-607c3b211868/volumes" Feb 14 15:17:40 crc kubenswrapper[4750]: I0214 15:17:40.977351 4750 scope.go:117] "RemoveContainer" containerID="6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af" Feb 14 15:17:41 crc kubenswrapper[4750]: I0214 15:17:41.033637 4750 scope.go:117] "RemoveContainer" containerID="cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6" Feb 14 15:17:41 crc kubenswrapper[4750]: E0214 15:17:41.034082 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6\": container with ID 
starting with cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6 not found: ID does not exist" containerID="cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6" Feb 14 15:17:41 crc kubenswrapper[4750]: I0214 15:17:41.034132 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6"} err="failed to get container status \"cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6\": rpc error: code = NotFound desc = could not find container \"cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6\": container with ID starting with cb9cee0f63911c9a28867188d2b3052966f2166f97995791c8ac1a254ecb73b6 not found: ID does not exist" Feb 14 15:17:41 crc kubenswrapper[4750]: I0214 15:17:41.034161 4750 scope.go:117] "RemoveContainer" containerID="955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e" Feb 14 15:17:41 crc kubenswrapper[4750]: E0214 15:17:41.034534 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e\": container with ID starting with 955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e not found: ID does not exist" containerID="955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e" Feb 14 15:17:41 crc kubenswrapper[4750]: I0214 15:17:41.034559 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e"} err="failed to get container status \"955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e\": rpc error: code = NotFound desc = could not find container \"955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e\": container with ID starting with 955b25a0f7c49a05ff149c2bfefc68e2cc8f7d67f3ecbb45c5d71d6f0af40d7e not found: 
ID does not exist" Feb 14 15:17:41 crc kubenswrapper[4750]: I0214 15:17:41.034576 4750 scope.go:117] "RemoveContainer" containerID="6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af" Feb 14 15:17:41 crc kubenswrapper[4750]: E0214 15:17:41.034782 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af\": container with ID starting with 6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af not found: ID does not exist" containerID="6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af" Feb 14 15:17:41 crc kubenswrapper[4750]: I0214 15:17:41.034814 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af"} err="failed to get container status \"6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af\": rpc error: code = NotFound desc = could not find container \"6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af\": container with ID starting with 6b11cee350cdebc294cf50191cb387761989fa6c31405b0cd1e12a32a8bba4af not found: ID does not exist" Feb 14 15:17:47 crc kubenswrapper[4750]: I0214 15:17:47.743251 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:17:47 crc kubenswrapper[4750]: E0214 15:17:47.744709 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:17:58 crc kubenswrapper[4750]: I0214 15:17:58.749298 4750 
scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:17:58 crc kubenswrapper[4750]: E0214 15:17:58.750134 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:18:12 crc kubenswrapper[4750]: I0214 15:18:12.742429 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:18:12 crc kubenswrapper[4750]: E0214 15:18:12.743204 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:18:23 crc kubenswrapper[4750]: I0214 15:18:23.742108 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:18:23 crc kubenswrapper[4750]: E0214 15:18:23.742921 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:18:36 crc kubenswrapper[4750]: I0214 
15:18:36.743089 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:18:36 crc kubenswrapper[4750]: E0214 15:18:36.744169 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:18:50 crc kubenswrapper[4750]: I0214 15:18:50.742851 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:18:50 crc kubenswrapper[4750]: E0214 15:18:50.743858 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:19:02 crc kubenswrapper[4750]: I0214 15:19:02.742409 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:19:02 crc kubenswrapper[4750]: E0214 15:19:02.743434 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:19:14 crc 
kubenswrapper[4750]: I0214 15:19:14.742227 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:19:14 crc kubenswrapper[4750]: E0214 15:19:14.744839 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:19:27 crc kubenswrapper[4750]: I0214 15:19:27.742759 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:19:27 crc kubenswrapper[4750]: E0214 15:19:27.744927 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:19:40 crc kubenswrapper[4750]: I0214 15:19:40.742839 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:19:40 crc kubenswrapper[4750]: E0214 15:19:40.743941 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 
14 15:19:51 crc kubenswrapper[4750]: I0214 15:19:51.742460 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:19:51 crc kubenswrapper[4750]: E0214 15:19:51.743219 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:20:06 crc kubenswrapper[4750]: I0214 15:20:06.742876 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:20:06 crc kubenswrapper[4750]: E0214 15:20:06.744199 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:20:21 crc kubenswrapper[4750]: I0214 15:20:21.741461 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:20:21 crc kubenswrapper[4750]: E0214 15:20:21.742267 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" 
podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:20:33 crc kubenswrapper[4750]: I0214 15:20:33.742556 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:20:33 crc kubenswrapper[4750]: E0214 15:20:33.743887 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:20:44 crc kubenswrapper[4750]: I0214 15:20:44.742594 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:20:44 crc kubenswrapper[4750]: E0214 15:20:44.743842 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:20:56 crc kubenswrapper[4750]: I0214 15:20:56.744561 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:20:56 crc kubenswrapper[4750]: E0214 15:20:56.748504 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:21:11 crc kubenswrapper[4750]: I0214 15:21:11.742903 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:21:12 crc kubenswrapper[4750]: I0214 15:21:12.869455 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"f5488fb9d81446672df7634f0f1aae142f55b501228ad973a1bdbc73b5acb76b"} Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.106432 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fhqr"] Feb 14 15:21:15 crc kubenswrapper[4750]: E0214 15:21:15.107535 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="extract-content" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.107552 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="extract-content" Feb 14 15:21:15 crc kubenswrapper[4750]: E0214 15:21:15.107566 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="extract-utilities" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.107575 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="extract-utilities" Feb 14 15:21:15 crc kubenswrapper[4750]: E0214 15:21:15.107594 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="extract-utilities" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.107603 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="extract-utilities" 
Feb 14 15:21:15 crc kubenswrapper[4750]: E0214 15:21:15.107632 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="registry-server" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.107641 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="registry-server" Feb 14 15:21:15 crc kubenswrapper[4750]: E0214 15:21:15.107657 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="registry-server" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.107665 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="registry-server" Feb 14 15:21:15 crc kubenswrapper[4750]: E0214 15:21:15.107706 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="extract-content" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.107717 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="extract-content" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.108001 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3fa922-1752-437b-a6f4-0246c8f6cbbf" containerName="registry-server" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.108038 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa6ff4d-bfe7-44cf-8ddf-607c3b211868" containerName="registry-server" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.112608 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.121709 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fhqr"] Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.242148 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-utilities\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.242270 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-catalog-content\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.242397 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89zs\" (UniqueName: \"kubernetes.io/projected/80119bba-2571-47b1-a3fc-263bd4341e74-kube-api-access-q89zs\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.344522 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-utilities\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.344637 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-catalog-content\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.344717 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q89zs\" (UniqueName: \"kubernetes.io/projected/80119bba-2571-47b1-a3fc-263bd4341e74-kube-api-access-q89zs\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.345187 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-catalog-content\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.345229 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-utilities\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.365238 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89zs\" (UniqueName: \"kubernetes.io/projected/80119bba-2571-47b1-a3fc-263bd4341e74-kube-api-access-q89zs\") pod \"redhat-operators-4fhqr\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:15 crc kubenswrapper[4750]: I0214 15:21:15.437066 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:16 crc kubenswrapper[4750]: I0214 15:21:16.063273 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fhqr"] Feb 14 15:21:16 crc kubenswrapper[4750]: W0214 15:21:16.065362 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80119bba_2571_47b1_a3fc_263bd4341e74.slice/crio-cebf3cedc3481b51ec6231b577d1a321b19a04ff3a4abc49eeb09c69626b900f WatchSource:0}: Error finding container cebf3cedc3481b51ec6231b577d1a321b19a04ff3a4abc49eeb09c69626b900f: Status 404 returned error can't find the container with id cebf3cedc3481b51ec6231b577d1a321b19a04ff3a4abc49eeb09c69626b900f Feb 14 15:21:16 crc kubenswrapper[4750]: I0214 15:21:16.929534 4750 generic.go:334] "Generic (PLEG): container finished" podID="80119bba-2571-47b1-a3fc-263bd4341e74" containerID="10dab733075a7281a8fe6a624008b464d000d4ca1618d12f67b489e87c974ba7" exitCode=0 Feb 14 15:21:16 crc kubenswrapper[4750]: I0214 15:21:16.929624 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerDied","Data":"10dab733075a7281a8fe6a624008b464d000d4ca1618d12f67b489e87c974ba7"} Feb 14 15:21:16 crc kubenswrapper[4750]: I0214 15:21:16.930478 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerStarted","Data":"cebf3cedc3481b51ec6231b577d1a321b19a04ff3a4abc49eeb09c69626b900f"} Feb 14 15:21:18 crc kubenswrapper[4750]: I0214 15:21:18.953647 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" 
event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerStarted","Data":"8367c3f61ac6e14e0a8ebf8d3dbb52ba569dc0cfcb53439e2f0e45d5b8b41bf4"} Feb 14 15:21:25 crc kubenswrapper[4750]: I0214 15:21:25.597432 4750 generic.go:334] "Generic (PLEG): container finished" podID="80119bba-2571-47b1-a3fc-263bd4341e74" containerID="8367c3f61ac6e14e0a8ebf8d3dbb52ba569dc0cfcb53439e2f0e45d5b8b41bf4" exitCode=0 Feb 14 15:21:25 crc kubenswrapper[4750]: I0214 15:21:25.597787 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerDied","Data":"8367c3f61ac6e14e0a8ebf8d3dbb52ba569dc0cfcb53439e2f0e45d5b8b41bf4"} Feb 14 15:21:26 crc kubenswrapper[4750]: I0214 15:21:26.613134 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerStarted","Data":"16776db71a0e7a396ffce5b90a911e3d96f3fa3200728d9d15daed2b130ce0f2"} Feb 14 15:21:26 crc kubenswrapper[4750]: I0214 15:21:26.642255 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fhqr" podStartSLOduration=2.578705932 podStartE2EDuration="11.642231632s" podCreationTimestamp="2026-02-14 15:21:15 +0000 UTC" firstStartedPulling="2026-02-14 15:21:16.93274486 +0000 UTC m=+5348.958734341" lastFinishedPulling="2026-02-14 15:21:25.99627055 +0000 UTC m=+5358.022260041" observedRunningTime="2026-02-14 15:21:26.631414124 +0000 UTC m=+5358.657403605" watchObservedRunningTime="2026-02-14 15:21:26.642231632 +0000 UTC m=+5358.668221133" Feb 14 15:21:35 crc kubenswrapper[4750]: I0214 15:21:35.438018 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:35 crc kubenswrapper[4750]: I0214 15:21:35.438514 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:36 crc kubenswrapper[4750]: I0214 15:21:36.503493 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fhqr" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="registry-server" probeResult="failure" output=< Feb 14 15:21:36 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:21:36 crc kubenswrapper[4750]: > Feb 14 15:21:46 crc kubenswrapper[4750]: I0214 15:21:46.523741 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fhqr" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="registry-server" probeResult="failure" output=< Feb 14 15:21:46 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:21:46 crc kubenswrapper[4750]: > Feb 14 15:21:55 crc kubenswrapper[4750]: I0214 15:21:55.901210 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:55 crc kubenswrapper[4750]: I0214 15:21:55.963161 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:56 crc kubenswrapper[4750]: I0214 15:21:56.141239 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fhqr"] Feb 14 15:21:57 crc kubenswrapper[4750]: I0214 15:21:57.023182 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4fhqr" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="registry-server" containerID="cri-o://16776db71a0e7a396ffce5b90a911e3d96f3fa3200728d9d15daed2b130ce0f2" gracePeriod=2 Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.037551 4750 generic.go:334] "Generic (PLEG): container finished" podID="80119bba-2571-47b1-a3fc-263bd4341e74" 
containerID="16776db71a0e7a396ffce5b90a911e3d96f3fa3200728d9d15daed2b130ce0f2" exitCode=0 Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.037837 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerDied","Data":"16776db71a0e7a396ffce5b90a911e3d96f3fa3200728d9d15daed2b130ce0f2"} Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.212340 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.325794 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-catalog-content\") pod \"80119bba-2571-47b1-a3fc-263bd4341e74\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.326157 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q89zs\" (UniqueName: \"kubernetes.io/projected/80119bba-2571-47b1-a3fc-263bd4341e74-kube-api-access-q89zs\") pod \"80119bba-2571-47b1-a3fc-263bd4341e74\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.326239 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-utilities\") pod \"80119bba-2571-47b1-a3fc-263bd4341e74\" (UID: \"80119bba-2571-47b1-a3fc-263bd4341e74\") " Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.327662 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-utilities" (OuterVolumeSpecName: "utilities") pod "80119bba-2571-47b1-a3fc-263bd4341e74" (UID: 
"80119bba-2571-47b1-a3fc-263bd4341e74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.335659 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80119bba-2571-47b1-a3fc-263bd4341e74-kube-api-access-q89zs" (OuterVolumeSpecName: "kube-api-access-q89zs") pod "80119bba-2571-47b1-a3fc-263bd4341e74" (UID: "80119bba-2571-47b1-a3fc-263bd4341e74"). InnerVolumeSpecName "kube-api-access-q89zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.430644 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q89zs\" (UniqueName: \"kubernetes.io/projected/80119bba-2571-47b1-a3fc-263bd4341e74-kube-api-access-q89zs\") on node \"crc\" DevicePath \"\"" Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.430705 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.453741 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80119bba-2571-47b1-a3fc-263bd4341e74" (UID: "80119bba-2571-47b1-a3fc-263bd4341e74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:21:58 crc kubenswrapper[4750]: I0214 15:21:58.533299 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80119bba-2571-47b1-a3fc-263bd4341e74-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.054467 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhqr" event={"ID":"80119bba-2571-47b1-a3fc-263bd4341e74","Type":"ContainerDied","Data":"cebf3cedc3481b51ec6231b577d1a321b19a04ff3a4abc49eeb09c69626b900f"} Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.054884 4750 scope.go:117] "RemoveContainer" containerID="16776db71a0e7a396ffce5b90a911e3d96f3fa3200728d9d15daed2b130ce0f2" Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.055100 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhqr" Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.093329 4750 scope.go:117] "RemoveContainer" containerID="8367c3f61ac6e14e0a8ebf8d3dbb52ba569dc0cfcb53439e2f0e45d5b8b41bf4" Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.096274 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fhqr"] Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.119263 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4fhqr"] Feb 14 15:21:59 crc kubenswrapper[4750]: I0214 15:21:59.133455 4750 scope.go:117] "RemoveContainer" containerID="10dab733075a7281a8fe6a624008b464d000d4ca1618d12f67b489e87c974ba7" Feb 14 15:22:00 crc kubenswrapper[4750]: I0214 15:22:00.754339 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" path="/var/lib/kubelet/pods/80119bba-2571-47b1-a3fc-263bd4341e74/volumes" Feb 14 15:23:30 crc 
kubenswrapper[4750]: I0214 15:23:30.128697 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:23:30 crc kubenswrapper[4750]: I0214 15:23:30.129451 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:24:00 crc kubenswrapper[4750]: I0214 15:24:00.128801 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:24:00 crc kubenswrapper[4750]: I0214 15:24:00.129819 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:24:30 crc kubenswrapper[4750]: I0214 15:24:30.129509 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:24:30 crc kubenswrapper[4750]: I0214 15:24:30.131056 4750 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:24:30 crc kubenswrapper[4750]: I0214 15:24:30.131202 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:24:30 crc kubenswrapper[4750]: I0214 15:24:30.132192 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5488fb9d81446672df7634f0f1aae142f55b501228ad973a1bdbc73b5acb76b"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:24:30 crc kubenswrapper[4750]: I0214 15:24:30.132349 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://f5488fb9d81446672df7634f0f1aae142f55b501228ad973a1bdbc73b5acb76b" gracePeriod=600 Feb 14 15:24:31 crc kubenswrapper[4750]: I0214 15:24:31.047913 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="f5488fb9d81446672df7634f0f1aae142f55b501228ad973a1bdbc73b5acb76b" exitCode=0 Feb 14 15:24:31 crc kubenswrapper[4750]: I0214 15:24:31.047987 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"f5488fb9d81446672df7634f0f1aae142f55b501228ad973a1bdbc73b5acb76b"} Feb 14 15:24:31 crc kubenswrapper[4750]: I0214 15:24:31.048486 4750 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b"} Feb 14 15:24:31 crc kubenswrapper[4750]: I0214 15:24:31.048510 4750 scope.go:117] "RemoveContainer" containerID="26cebc85a130c4076813ee9b07697b8d2dff6a8d103fac484c7cc5e35953c524" Feb 14 15:26:04 crc kubenswrapper[4750]: I0214 15:26:04.288002 4750 generic.go:334] "Generic (PLEG): container finished" podID="e4d753a9-5bca-4940-9aa9-72a57f4f32a2" containerID="f96e306a1f1fb780d83613f69bfb43c6c97ce1343c341df89356163b83284a2d" exitCode=0 Feb 14 15:26:04 crc kubenswrapper[4750]: I0214 15:26:04.288203 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e4d753a9-5bca-4940-9aa9-72a57f4f32a2","Type":"ContainerDied","Data":"f96e306a1f1fb780d83613f69bfb43c6c97ce1343c341df89356163b83284a2d"} Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.801241 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.921618 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.921767 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config-secret\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.921813 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-temporary\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.921849 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-workdir\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.921982 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ca-certs\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.922039 4750 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-plfb5\" (UniqueName: \"kubernetes.io/projected/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-kube-api-access-plfb5\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.922264 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-config-data\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.922306 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ssh-key\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.922399 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config\") pod \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\" (UID: \"e4d753a9-5bca-4940-9aa9-72a57f4f32a2\") " Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.923782 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.925050 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-config-data" (OuterVolumeSpecName: "config-data") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 15:26:05 crc kubenswrapper[4750]: I0214 15:26:05.931901 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.026145 4750 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.026195 4750 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.026212 4750 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.324204 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"e4d753a9-5bca-4940-9aa9-72a57f4f32a2","Type":"ContainerDied","Data":"32f71c78ce9db0876b18bce2074c9ecadd6dc999e6e62ab73a24bf11bc776d05"} Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.324251 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f71c78ce9db0876b18bce2074c9ecadd6dc999e6e62ab73a24bf11bc776d05" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.324325 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.790506 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.798034 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-kube-api-access-plfb5" (OuterVolumeSpecName: "kube-api-access-plfb5") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "kube-api-access-plfb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.834628 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.843530 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.848587 4750 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.849710 4750 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.849751 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.849771 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plfb5\" (UniqueName: \"kubernetes.io/projected/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-kube-api-access-plfb5\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.859827 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.879716 4750 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.889509 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e4d753a9-5bca-4940-9aa9-72a57f4f32a2" (UID: "e4d753a9-5bca-4940-9aa9-72a57f4f32a2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.952446 4750 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.952475 4750 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e4d753a9-5bca-4940-9aa9-72a57f4f32a2-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:06 crc kubenswrapper[4750]: I0214 15:26:06.952487 4750 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.507650 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 15:26:14 crc kubenswrapper[4750]: E0214 15:26:14.509192 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="extract-utilities" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.509218 4750 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="extract-utilities" Feb 14 15:26:14 crc kubenswrapper[4750]: E0214 15:26:14.509277 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d753a9-5bca-4940-9aa9-72a57f4f32a2" containerName="tempest-tests-tempest-tests-runner" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.509293 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d753a9-5bca-4940-9aa9-72a57f4f32a2" containerName="tempest-tests-tempest-tests-runner" Feb 14 15:26:14 crc kubenswrapper[4750]: E0214 15:26:14.509323 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="extract-content" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.509338 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="extract-content" Feb 14 15:26:14 crc kubenswrapper[4750]: E0214 15:26:14.509392 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="registry-server" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.509405 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="registry-server" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.509855 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d753a9-5bca-4940-9aa9-72a57f4f32a2" containerName="tempest-tests-tempest-tests-runner" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.509902 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="80119bba-2571-47b1-a3fc-263bd4341e74" containerName="registry-server" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.512770 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.524135 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.531710 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j6dvp" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.665227 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.665288 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkttk\" (UniqueName: \"kubernetes.io/projected/6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5-kube-api-access-xkttk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.767430 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkttk\" (UniqueName: \"kubernetes.io/projected/6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5-kube-api-access-xkttk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.767730 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.769682 4750 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.799785 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkttk\" (UniqueName: \"kubernetes.io/projected/6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5-kube-api-access-xkttk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.805163 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:14 crc kubenswrapper[4750]: I0214 15:26:14.851176 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 15:26:15 crc kubenswrapper[4750]: I0214 15:26:15.338610 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 15:26:15 crc kubenswrapper[4750]: I0214 15:26:15.345960 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 15:26:15 crc kubenswrapper[4750]: I0214 15:26:15.448090 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5","Type":"ContainerStarted","Data":"7c8d64c71776ed7f957fa02dafe7d0a768a69cf948ec8e8ffabb674315ecc453"} Feb 14 15:26:17 crc kubenswrapper[4750]: I0214 15:26:17.476301 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5","Type":"ContainerStarted","Data":"fbd5c342c7ba4e483b13d61f5eff35b1dcc80d4e324a187153aa482f96b3b6a5"} Feb 14 15:26:17 crc kubenswrapper[4750]: I0214 15:26:17.517881 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.341952954 podStartE2EDuration="3.51785371s" podCreationTimestamp="2026-02-14 15:26:14 +0000 UTC" firstStartedPulling="2026-02-14 15:26:15.345542656 +0000 UTC m=+5647.371532177" lastFinishedPulling="2026-02-14 15:26:16.521443442 +0000 UTC m=+5648.547432933" observedRunningTime="2026-02-14 15:26:17.499725785 +0000 UTC m=+5649.525715286" watchObservedRunningTime="2026-02-14 15:26:17.51785371 +0000 UTC m=+5649.543843231" Feb 14 15:26:30 crc kubenswrapper[4750]: I0214 15:26:30.128890 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:26:30 crc kubenswrapper[4750]: I0214 15:26:30.129585 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.099538 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2q8d6/must-gather-schln"] Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.101713 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.103914 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2q8d6"/"default-dockercfg-fdwwm" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.107580 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2q8d6"/"openshift-service-ca.crt" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.107592 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2q8d6"/"kube-root-ca.crt" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.113981 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2q8d6/must-gather-schln"] Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.268146 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f3d51424-6083-455e-b9be-7ddc3b521a8d-must-gather-output\") pod \"must-gather-schln\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " 
pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.268349 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tvcd\" (UniqueName: \"kubernetes.io/projected/f3d51424-6083-455e-b9be-7ddc3b521a8d-kube-api-access-2tvcd\") pod \"must-gather-schln\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.369875 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f3d51424-6083-455e-b9be-7ddc3b521a8d-must-gather-output\") pod \"must-gather-schln\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.370004 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tvcd\" (UniqueName: \"kubernetes.io/projected/f3d51424-6083-455e-b9be-7ddc3b521a8d-kube-api-access-2tvcd\") pod \"must-gather-schln\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.370422 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f3d51424-6083-455e-b9be-7ddc3b521a8d-must-gather-output\") pod \"must-gather-schln\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.392549 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tvcd\" (UniqueName: \"kubernetes.io/projected/f3d51424-6083-455e-b9be-7ddc3b521a8d-kube-api-access-2tvcd\") pod \"must-gather-schln\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " 
pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.418269 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:26:45 crc kubenswrapper[4750]: I0214 15:26:45.900362 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2q8d6/must-gather-schln"] Feb 14 15:26:46 crc kubenswrapper[4750]: I0214 15:26:46.861022 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/must-gather-schln" event={"ID":"f3d51424-6083-455e-b9be-7ddc3b521a8d","Type":"ContainerStarted","Data":"aaca1bc1853a4f8f1688f3d02435ff143a58bba763aa31872c2a106c511c1d54"} Feb 14 15:26:53 crc kubenswrapper[4750]: I0214 15:26:53.948024 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/must-gather-schln" event={"ID":"f3d51424-6083-455e-b9be-7ddc3b521a8d","Type":"ContainerStarted","Data":"91e2acc658592e2c2b31addc3926767b04d268e06a70b5ed2075e8d119d557e2"} Feb 14 15:26:53 crc kubenswrapper[4750]: I0214 15:26:53.949839 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/must-gather-schln" event={"ID":"f3d51424-6083-455e-b9be-7ddc3b521a8d","Type":"ContainerStarted","Data":"1abd7d6d262ba9b37a2320d2f9c56c9186a56d204949e476d28b78acfe56e82d"} Feb 14 15:26:53 crc kubenswrapper[4750]: I0214 15:26:53.972695 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2q8d6/must-gather-schln" podStartSLOduration=1.752175606 podStartE2EDuration="8.972676227s" podCreationTimestamp="2026-02-14 15:26:45 +0000 UTC" firstStartedPulling="2026-02-14 15:26:45.89028985 +0000 UTC m=+5677.916279331" lastFinishedPulling="2026-02-14 15:26:53.110790471 +0000 UTC m=+5685.136779952" observedRunningTime="2026-02-14 15:26:53.967205812 +0000 UTC m=+5685.993195293" watchObservedRunningTime="2026-02-14 15:26:53.972676227 +0000 UTC 
m=+5685.998665708" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.102868 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-zgqkv"] Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.104751 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.129323 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.129536 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.236768 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58vhc\" (UniqueName: \"kubernetes.io/projected/e0e4c397-d891-49d6-9b55-8120044646b1-kube-api-access-58vhc\") pod \"crc-debug-zgqkv\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.237436 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e4c397-d891-49d6-9b55-8120044646b1-host\") pod \"crc-debug-zgqkv\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.340213 4750 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e4c397-d891-49d6-9b55-8120044646b1-host\") pod \"crc-debug-zgqkv\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.340564 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58vhc\" (UniqueName: \"kubernetes.io/projected/e0e4c397-d891-49d6-9b55-8120044646b1-kube-api-access-58vhc\") pod \"crc-debug-zgqkv\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.341196 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e4c397-d891-49d6-9b55-8120044646b1-host\") pod \"crc-debug-zgqkv\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.363555 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58vhc\" (UniqueName: \"kubernetes.io/projected/e0e4c397-d891-49d6-9b55-8120044646b1-kube-api-access-58vhc\") pod \"crc-debug-zgqkv\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:00 crc kubenswrapper[4750]: I0214 15:27:00.424321 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:27:01 crc kubenswrapper[4750]: I0214 15:27:01.027807 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" event={"ID":"e0e4c397-d891-49d6-9b55-8120044646b1","Type":"ContainerStarted","Data":"f4789065eab88620fde2ef26c35299f6e9299d9a8e677a198abecc8335c4267b"} Feb 14 15:27:11 crc kubenswrapper[4750]: I0214 15:27:11.139891 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" event={"ID":"e0e4c397-d891-49d6-9b55-8120044646b1","Type":"ContainerStarted","Data":"d44f994654a9535a9d48dec9bd3899d9aa50440713e46fdb5e4dc25157798c2a"} Feb 14 15:27:11 crc kubenswrapper[4750]: I0214 15:27:11.165995 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" podStartSLOduration=0.999409963 podStartE2EDuration="11.165970101s" podCreationTimestamp="2026-02-14 15:27:00 +0000 UTC" firstStartedPulling="2026-02-14 15:27:00.480482961 +0000 UTC m=+5692.506472442" lastFinishedPulling="2026-02-14 15:27:10.647043099 +0000 UTC m=+5702.673032580" observedRunningTime="2026-02-14 15:27:11.152947691 +0000 UTC m=+5703.178937192" watchObservedRunningTime="2026-02-14 15:27:11.165970101 +0000 UTC m=+5703.191959582" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.129535 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.130052 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.130091 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.130617 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.130668 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" gracePeriod=600 Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.304552 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5wpp"] Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.307678 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.316402 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5wpp"] Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.382872 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-catalog-content\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.383781 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-utilities\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.383841 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6tvt\" (UniqueName: \"kubernetes.io/projected/fd42577d-6a4a-42c3-b824-947e05d0da59-kube-api-access-c6tvt\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.486767 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-catalog-content\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.487012 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-utilities\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.487087 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6tvt\" (UniqueName: \"kubernetes.io/projected/fd42577d-6a4a-42c3-b824-947e05d0da59-kube-api-access-c6tvt\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.487420 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-catalog-content\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.487653 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-utilities\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.567160 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6tvt\" (UniqueName: \"kubernetes.io/projected/fd42577d-6a4a-42c3-b824-947e05d0da59-kube-api-access-c6tvt\") pod \"certified-operators-x5wpp\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.630255 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.922919 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjr8"] Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.926769 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:30 crc kubenswrapper[4750]: I0214 15:27:30.950785 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjr8"] Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.006532 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-catalog-content\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.006613 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkw9\" (UniqueName: \"kubernetes.io/projected/92eea70a-b0e4-43b6-973c-e632c365a222-kube-api-access-lpkw9\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.006695 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-utilities\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: E0214 15:27:31.091939 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.108583 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-utilities\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.108729 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-catalog-content\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.108798 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpkw9\" (UniqueName: \"kubernetes.io/projected/92eea70a-b0e4-43b6-973c-e632c365a222-kube-api-access-lpkw9\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.109534 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-catalog-content\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.109810 4750 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-utilities\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.130082 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpkw9\" (UniqueName: \"kubernetes.io/projected/92eea70a-b0e4-43b6-973c-e632c365a222-kube-api-access-lpkw9\") pod \"redhat-marketplace-cnjr8\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.265178 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.318441 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5wpp"] Feb 14 15:27:31 crc kubenswrapper[4750]: W0214 15:27:31.329287 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd42577d_6a4a_42c3_b824_947e05d0da59.slice/crio-e2f91bc29e0182a461cc2faed6dba3cf16585115013fe0abd7a8b626b7483be8 WatchSource:0}: Error finding container e2f91bc29e0182a461cc2faed6dba3cf16585115013fe0abd7a8b626b7483be8: Status 404 returned error can't find the container with id e2f91bc29e0182a461cc2faed6dba3cf16585115013fe0abd7a8b626b7483be8 Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.440979 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" exitCode=0 Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.441037 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b"} Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.441069 4750 scope.go:117] "RemoveContainer" containerID="f5488fb9d81446672df7634f0f1aae142f55b501228ad973a1bdbc73b5acb76b" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.442131 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:27:31 crc kubenswrapper[4750]: E0214 15:27:31.443798 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.450413 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerStarted","Data":"e2f91bc29e0182a461cc2faed6dba3cf16585115013fe0abd7a8b626b7483be8"} Feb 14 15:27:31 crc kubenswrapper[4750]: I0214 15:27:31.793677 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjr8"] Feb 14 15:27:32 crc kubenswrapper[4750]: I0214 15:27:32.463595 4750 generic.go:334] "Generic (PLEG): container finished" podID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerID="7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753" exitCode=0 Feb 14 15:27:32 crc kubenswrapper[4750]: I0214 15:27:32.463654 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" 
event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerDied","Data":"7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753"} Feb 14 15:27:32 crc kubenswrapper[4750]: I0214 15:27:32.468895 4750 generic.go:334] "Generic (PLEG): container finished" podID="92eea70a-b0e4-43b6-973c-e632c365a222" containerID="6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb" exitCode=0 Feb 14 15:27:32 crc kubenswrapper[4750]: I0214 15:27:32.468933 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerDied","Data":"6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb"} Feb 14 15:27:32 crc kubenswrapper[4750]: I0214 15:27:32.468958 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerStarted","Data":"9ca1b78c08dc5c8f42def1a3305f28b7c200c4b16a65e00f8762b1146d6d46fd"} Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.301889 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6p6t"] Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.304615 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.320757 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6p6t"] Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.362574 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-catalog-content\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.362623 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wm6\" (UniqueName: \"kubernetes.io/projected/bdfe00f2-3b68-4a56-b421-fca252c24ce9-kube-api-access-g9wm6\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.362665 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-utilities\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.465046 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-catalog-content\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.465094 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g9wm6\" (UniqueName: \"kubernetes.io/projected/bdfe00f2-3b68-4a56-b421-fca252c24ce9-kube-api-access-g9wm6\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.465158 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-utilities\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.465599 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-catalog-content\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.465635 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-utilities\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.489419 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerStarted","Data":"849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43"} Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.497865 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wm6\" (UniqueName: 
\"kubernetes.io/projected/bdfe00f2-3b68-4a56-b421-fca252c24ce9-kube-api-access-g9wm6\") pod \"community-operators-c6p6t\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.503429 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerStarted","Data":"d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704"} Feb 14 15:27:33 crc kubenswrapper[4750]: I0214 15:27:33.627716 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:34 crc kubenswrapper[4750]: W0214 15:27:34.121845 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdfe00f2_3b68_4a56_b421_fca252c24ce9.slice/crio-39dc2d6eb95669648fecf1b6bd4eedabce9229ca680144710eb39b9d5491d7e4 WatchSource:0}: Error finding container 39dc2d6eb95669648fecf1b6bd4eedabce9229ca680144710eb39b9d5491d7e4: Status 404 returned error can't find the container with id 39dc2d6eb95669648fecf1b6bd4eedabce9229ca680144710eb39b9d5491d7e4 Feb 14 15:27:34 crc kubenswrapper[4750]: I0214 15:27:34.123790 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6p6t"] Feb 14 15:27:34 crc kubenswrapper[4750]: I0214 15:27:34.518177 4750 generic.go:334] "Generic (PLEG): container finished" podID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerID="b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52" exitCode=0 Feb 14 15:27:34 crc kubenswrapper[4750]: I0214 15:27:34.520617 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" 
event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerDied","Data":"b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52"} Feb 14 15:27:34 crc kubenswrapper[4750]: I0214 15:27:34.520665 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerStarted","Data":"39dc2d6eb95669648fecf1b6bd4eedabce9229ca680144710eb39b9d5491d7e4"} Feb 14 15:27:35 crc kubenswrapper[4750]: I0214 15:27:35.529469 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerStarted","Data":"5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a"} Feb 14 15:27:35 crc kubenswrapper[4750]: I0214 15:27:35.533607 4750 generic.go:334] "Generic (PLEG): container finished" podID="92eea70a-b0e4-43b6-973c-e632c365a222" containerID="d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704" exitCode=0 Feb 14 15:27:35 crc kubenswrapper[4750]: I0214 15:27:35.533648 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerDied","Data":"d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704"} Feb 14 15:27:36 crc kubenswrapper[4750]: I0214 15:27:36.550730 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerStarted","Data":"2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc"} Feb 14 15:27:36 crc kubenswrapper[4750]: I0214 15:27:36.562147 4750 generic.go:334] "Generic (PLEG): container finished" podID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerID="849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43" exitCode=0 Feb 14 15:27:36 crc kubenswrapper[4750]: I0214 
15:27:36.562225 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerDied","Data":"849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43"} Feb 14 15:27:36 crc kubenswrapper[4750]: I0214 15:27:36.573724 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnjr8" podStartSLOduration=3.0334558 podStartE2EDuration="6.573707417s" podCreationTimestamp="2026-02-14 15:27:30 +0000 UTC" firstStartedPulling="2026-02-14 15:27:32.470905808 +0000 UTC m=+5724.496895289" lastFinishedPulling="2026-02-14 15:27:36.011157425 +0000 UTC m=+5728.037146906" observedRunningTime="2026-02-14 15:27:36.569725524 +0000 UTC m=+5728.595715005" watchObservedRunningTime="2026-02-14 15:27:36.573707417 +0000 UTC m=+5728.599696888" Feb 14 15:27:37 crc kubenswrapper[4750]: I0214 15:27:37.575107 4750 generic.go:334] "Generic (PLEG): container finished" podID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerID="5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a" exitCode=0 Feb 14 15:27:37 crc kubenswrapper[4750]: I0214 15:27:37.575156 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerDied","Data":"5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a"} Feb 14 15:27:37 crc kubenswrapper[4750]: I0214 15:27:37.580628 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerStarted","Data":"22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55"} Feb 14 15:27:37 crc kubenswrapper[4750]: I0214 15:27:37.622956 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5wpp" 
podStartSLOduration=3.079158949 podStartE2EDuration="7.622928846s" podCreationTimestamp="2026-02-14 15:27:30 +0000 UTC" firstStartedPulling="2026-02-14 15:27:32.465360971 +0000 UTC m=+5724.491350452" lastFinishedPulling="2026-02-14 15:27:37.009130878 +0000 UTC m=+5729.035120349" observedRunningTime="2026-02-14 15:27:37.608427674 +0000 UTC m=+5729.634417155" watchObservedRunningTime="2026-02-14 15:27:37.622928846 +0000 UTC m=+5729.648918327" Feb 14 15:27:38 crc kubenswrapper[4750]: I0214 15:27:38.595037 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerStarted","Data":"9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29"} Feb 14 15:27:38 crc kubenswrapper[4750]: I0214 15:27:38.618928 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6p6t" podStartSLOduration=2.116476227 podStartE2EDuration="5.61891235s" podCreationTimestamp="2026-02-14 15:27:33 +0000 UTC" firstStartedPulling="2026-02-14 15:27:34.530931153 +0000 UTC m=+5726.556920634" lastFinishedPulling="2026-02-14 15:27:38.033367276 +0000 UTC m=+5730.059356757" observedRunningTime="2026-02-14 15:27:38.614456684 +0000 UTC m=+5730.640446165" watchObservedRunningTime="2026-02-14 15:27:38.61891235 +0000 UTC m=+5730.644901831" Feb 14 15:27:40 crc kubenswrapper[4750]: I0214 15:27:40.630595 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:40 crc kubenswrapper[4750]: I0214 15:27:40.631054 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:41 crc kubenswrapper[4750]: I0214 15:27:41.265776 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:41 crc 
kubenswrapper[4750]: I0214 15:27:41.266051 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:41 crc kubenswrapper[4750]: I0214 15:27:41.684453 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-x5wpp" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="registry-server" probeResult="failure" output=< Feb 14 15:27:41 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:27:41 crc kubenswrapper[4750]: > Feb 14 15:27:42 crc kubenswrapper[4750]: I0214 15:27:42.724382 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cnjr8" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="registry-server" probeResult="failure" output=< Feb 14 15:27:42 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:27:42 crc kubenswrapper[4750]: > Feb 14 15:27:43 crc kubenswrapper[4750]: I0214 15:27:43.628072 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:43 crc kubenswrapper[4750]: I0214 15:27:43.628370 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:44 crc kubenswrapper[4750]: I0214 15:27:44.686776 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c6p6t" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="registry-server" probeResult="failure" output=< Feb 14 15:27:44 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:27:44 crc kubenswrapper[4750]: > Feb 14 15:27:46 crc kubenswrapper[4750]: I0214 15:27:46.742755 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:27:46 crc 
kubenswrapper[4750]: E0214 15:27:46.743756 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:27:50 crc kubenswrapper[4750]: I0214 15:27:50.684899 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:50 crc kubenswrapper[4750]: I0214 15:27:50.730523 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:50 crc kubenswrapper[4750]: I0214 15:27:50.921151 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5wpp"] Feb 14 15:27:51 crc kubenswrapper[4750]: I0214 15:27:51.321185 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:51 crc kubenswrapper[4750]: I0214 15:27:51.370612 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:51 crc kubenswrapper[4750]: I0214 15:27:51.744484 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5wpp" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="registry-server" containerID="cri-o://22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55" gracePeriod=2 Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.433979 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.510916 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-utilities\") pod \"fd42577d-6a4a-42c3-b824-947e05d0da59\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.511075 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6tvt\" (UniqueName: \"kubernetes.io/projected/fd42577d-6a4a-42c3-b824-947e05d0da59-kube-api-access-c6tvt\") pod \"fd42577d-6a4a-42c3-b824-947e05d0da59\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.511450 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-catalog-content\") pod \"fd42577d-6a4a-42c3-b824-947e05d0da59\" (UID: \"fd42577d-6a4a-42c3-b824-947e05d0da59\") " Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.511832 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-utilities" (OuterVolumeSpecName: "utilities") pod "fd42577d-6a4a-42c3-b824-947e05d0da59" (UID: "fd42577d-6a4a-42c3-b824-947e05d0da59"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.512382 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.517350 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd42577d-6a4a-42c3-b824-947e05d0da59-kube-api-access-c6tvt" (OuterVolumeSpecName: "kube-api-access-c6tvt") pod "fd42577d-6a4a-42c3-b824-947e05d0da59" (UID: "fd42577d-6a4a-42c3-b824-947e05d0da59"). InnerVolumeSpecName "kube-api-access-c6tvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.561015 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd42577d-6a4a-42c3-b824-947e05d0da59" (UID: "fd42577d-6a4a-42c3-b824-947e05d0da59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.614934 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd42577d-6a4a-42c3-b824-947e05d0da59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.614964 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6tvt\" (UniqueName: \"kubernetes.io/projected/fd42577d-6a4a-42c3-b824-947e05d0da59-kube-api-access-c6tvt\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.756345 4750 generic.go:334] "Generic (PLEG): container finished" podID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerID="22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55" exitCode=0 Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.756382 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerDied","Data":"22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55"} Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.756403 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpp" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.756415 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpp" event={"ID":"fd42577d-6a4a-42c3-b824-947e05d0da59","Type":"ContainerDied","Data":"e2f91bc29e0182a461cc2faed6dba3cf16585115013fe0abd7a8b626b7483be8"} Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.756434 4750 scope.go:117] "RemoveContainer" containerID="22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.793994 4750 scope.go:117] "RemoveContainer" containerID="849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.802616 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5wpp"] Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.817527 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5wpp"] Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.818848 4750 scope.go:117] "RemoveContainer" containerID="7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.875876 4750 scope.go:117] "RemoveContainer" containerID="22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55" Feb 14 15:27:52 crc kubenswrapper[4750]: E0214 15:27:52.887166 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55\": container with ID starting with 22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55 not found: ID does not exist" containerID="22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.887239 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55"} err="failed to get container status \"22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55\": rpc error: code = NotFound desc = could not find container \"22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55\": container with ID starting with 22a33d1fd9bd0d5385688f9bfef678481851378c1e3f21ef9f4a1b0acc808b55 not found: ID does not exist" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.887274 4750 scope.go:117] "RemoveContainer" containerID="849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43" Feb 14 15:27:52 crc kubenswrapper[4750]: E0214 15:27:52.887673 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43\": container with ID starting with 849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43 not found: ID does not exist" containerID="849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.887698 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43"} err="failed to get container status \"849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43\": rpc error: code = NotFound desc = could not find container \"849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43\": container with ID starting with 849d6f4dfe6775dbe1c056be24c39bfc76a7bcc9de2076bda1e5e417afd74d43 not found: ID does not exist" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.887712 4750 scope.go:117] "RemoveContainer" containerID="7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753" Feb 14 15:27:52 crc kubenswrapper[4750]: E0214 
15:27:52.887923 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753\": container with ID starting with 7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753 not found: ID does not exist" containerID="7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753" Feb 14 15:27:52 crc kubenswrapper[4750]: I0214 15:27:52.887940 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753"} err="failed to get container status \"7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753\": rpc error: code = NotFound desc = could not find container \"7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753\": container with ID starting with 7451e4d2de2f5bb649e45e2c4722ab635da234f9defd4348284959bf8f434753 not found: ID does not exist" Feb 14 15:27:53 crc kubenswrapper[4750]: I0214 15:27:53.682551 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:53 crc kubenswrapper[4750]: I0214 15:27:53.741593 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjr8"] Feb 14 15:27:53 crc kubenswrapper[4750]: I0214 15:27:53.742030 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnjr8" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="registry-server" containerID="cri-o://2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc" gracePeriod=2 Feb 14 15:27:53 crc kubenswrapper[4750]: I0214 15:27:53.753702 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.298671 
4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.352706 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-utilities\") pod \"92eea70a-b0e4-43b6-973c-e632c365a222\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.352819 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-catalog-content\") pod \"92eea70a-b0e4-43b6-973c-e632c365a222\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.353004 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpkw9\" (UniqueName: \"kubernetes.io/projected/92eea70a-b0e4-43b6-973c-e632c365a222-kube-api-access-lpkw9\") pod \"92eea70a-b0e4-43b6-973c-e632c365a222\" (UID: \"92eea70a-b0e4-43b6-973c-e632c365a222\") " Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.353546 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-utilities" (OuterVolumeSpecName: "utilities") pod "92eea70a-b0e4-43b6-973c-e632c365a222" (UID: "92eea70a-b0e4-43b6-973c-e632c365a222"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.354618 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.358777 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92eea70a-b0e4-43b6-973c-e632c365a222-kube-api-access-lpkw9" (OuterVolumeSpecName: "kube-api-access-lpkw9") pod "92eea70a-b0e4-43b6-973c-e632c365a222" (UID: "92eea70a-b0e4-43b6-973c-e632c365a222"). InnerVolumeSpecName "kube-api-access-lpkw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.381925 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92eea70a-b0e4-43b6-973c-e632c365a222" (UID: "92eea70a-b0e4-43b6-973c-e632c365a222"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.456573 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eea70a-b0e4-43b6-973c-e632c365a222-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.456835 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpkw9\" (UniqueName: \"kubernetes.io/projected/92eea70a-b0e4-43b6-973c-e632c365a222-kube-api-access-lpkw9\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.760264 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" path="/var/lib/kubelet/pods/fd42577d-6a4a-42c3-b824-947e05d0da59/volumes" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.781018 4750 generic.go:334] "Generic (PLEG): container finished" podID="92eea70a-b0e4-43b6-973c-e632c365a222" containerID="2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc" exitCode=0 Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.781055 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerDied","Data":"2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc"} Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.781080 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjr8" event={"ID":"92eea70a-b0e4-43b6-973c-e632c365a222","Type":"ContainerDied","Data":"9ca1b78c08dc5c8f42def1a3305f28b7c200c4b16a65e00f8762b1146d6d46fd"} Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.781087 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnjr8" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.781096 4750 scope.go:117] "RemoveContainer" containerID="2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.809303 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjr8"] Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.812479 4750 scope.go:117] "RemoveContainer" containerID="d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.821780 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjr8"] Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.837891 4750 scope.go:117] "RemoveContainer" containerID="6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.886672 4750 scope.go:117] "RemoveContainer" containerID="2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc" Feb 14 15:27:54 crc kubenswrapper[4750]: E0214 15:27:54.887558 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc\": container with ID starting with 2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc not found: ID does not exist" containerID="2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.887590 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc"} err="failed to get container status \"2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc\": rpc error: code = NotFound desc = could not find container 
\"2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc\": container with ID starting with 2a84589b5b0b4def4d15d4bd1fe1de227246682c803aeae1d73e441315fb50bc not found: ID does not exist" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.887611 4750 scope.go:117] "RemoveContainer" containerID="d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704" Feb 14 15:27:54 crc kubenswrapper[4750]: E0214 15:27:54.887820 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704\": container with ID starting with d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704 not found: ID does not exist" containerID="d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.887837 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704"} err="failed to get container status \"d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704\": rpc error: code = NotFound desc = could not find container \"d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704\": container with ID starting with d4f7592316b57feb8f0cfd54f91b6f9d0988a70bf171787f49265f4cedbc0704 not found: ID does not exist" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.887849 4750 scope.go:117] "RemoveContainer" containerID="6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb" Feb 14 15:27:54 crc kubenswrapper[4750]: E0214 15:27:54.888031 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb\": container with ID starting with 6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb not found: ID does not exist" 
containerID="6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb" Feb 14 15:27:54 crc kubenswrapper[4750]: I0214 15:27:54.888047 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb"} err="failed to get container status \"6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb\": rpc error: code = NotFound desc = could not find container \"6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb\": container with ID starting with 6e4ff3e6b1130c9d1e4f056a5727015a4c31b29e1bbfb2ad7b38128938b58ecb not found: ID does not exist" Feb 14 15:27:55 crc kubenswrapper[4750]: I0214 15:27:55.926006 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6p6t"] Feb 14 15:27:55 crc kubenswrapper[4750]: I0214 15:27:55.926533 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c6p6t" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="registry-server" containerID="cri-o://9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29" gracePeriod=2 Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.450317 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.610367 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-utilities\") pod \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.610424 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9wm6\" (UniqueName: \"kubernetes.io/projected/bdfe00f2-3b68-4a56-b421-fca252c24ce9-kube-api-access-g9wm6\") pod \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.610668 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-catalog-content\") pod \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\" (UID: \"bdfe00f2-3b68-4a56-b421-fca252c24ce9\") " Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.611499 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-utilities" (OuterVolumeSpecName: "utilities") pod "bdfe00f2-3b68-4a56-b421-fca252c24ce9" (UID: "bdfe00f2-3b68-4a56-b421-fca252c24ce9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.617187 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfe00f2-3b68-4a56-b421-fca252c24ce9-kube-api-access-g9wm6" (OuterVolumeSpecName: "kube-api-access-g9wm6") pod "bdfe00f2-3b68-4a56-b421-fca252c24ce9" (UID: "bdfe00f2-3b68-4a56-b421-fca252c24ce9"). InnerVolumeSpecName "kube-api-access-g9wm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.668075 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdfe00f2-3b68-4a56-b421-fca252c24ce9" (UID: "bdfe00f2-3b68-4a56-b421-fca252c24ce9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.713120 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.713159 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfe00f2-3b68-4a56-b421-fca252c24ce9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.713173 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9wm6\" (UniqueName: \"kubernetes.io/projected/bdfe00f2-3b68-4a56-b421-fca252c24ce9-kube-api-access-g9wm6\") on node \"crc\" DevicePath \"\"" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.759386 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" path="/var/lib/kubelet/pods/92eea70a-b0e4-43b6-973c-e632c365a222/volumes" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.810078 4750 generic.go:334] "Generic (PLEG): container finished" podID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerID="9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29" exitCode=0 Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.810128 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" 
event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerDied","Data":"9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29"} Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.810140 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6p6t" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.810157 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6p6t" event={"ID":"bdfe00f2-3b68-4a56-b421-fca252c24ce9","Type":"ContainerDied","Data":"39dc2d6eb95669648fecf1b6bd4eedabce9229ca680144710eb39b9d5491d7e4"} Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.810174 4750 scope.go:117] "RemoveContainer" containerID="9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.832703 4750 scope.go:117] "RemoveContainer" containerID="5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.841454 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6p6t"] Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.850635 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c6p6t"] Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.853087 4750 scope.go:117] "RemoveContainer" containerID="b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.901904 4750 scope.go:117] "RemoveContainer" containerID="9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29" Feb 14 15:27:56 crc kubenswrapper[4750]: E0214 15:27:56.902358 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29\": container 
with ID starting with 9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29 not found: ID does not exist" containerID="9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.902386 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29"} err="failed to get container status \"9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29\": rpc error: code = NotFound desc = could not find container \"9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29\": container with ID starting with 9678409b94ff6b63576a3208ae41f6e4bcc3c95b81f978ed71008cc7921c0e29 not found: ID does not exist" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.902409 4750 scope.go:117] "RemoveContainer" containerID="5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a" Feb 14 15:27:56 crc kubenswrapper[4750]: E0214 15:27:56.902663 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a\": container with ID starting with 5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a not found: ID does not exist" containerID="5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.902680 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a"} err="failed to get container status \"5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a\": rpc error: code = NotFound desc = could not find container \"5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a\": container with ID starting with 5bdf25eba9368bacf04065a7f55b74573ef90780484fe013b7ca701adefe1f5a not 
found: ID does not exist" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.902692 4750 scope.go:117] "RemoveContainer" containerID="b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52" Feb 14 15:27:56 crc kubenswrapper[4750]: E0214 15:27:56.902937 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52\": container with ID starting with b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52 not found: ID does not exist" containerID="b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52" Feb 14 15:27:56 crc kubenswrapper[4750]: I0214 15:27:56.902982 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52"} err="failed to get container status \"b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52\": rpc error: code = NotFound desc = could not find container \"b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52\": container with ID starting with b2d3363092c089c4b56228fbe18e0df08bf58674f50d9d5163a8cc24e154ed52 not found: ID does not exist" Feb 14 15:27:58 crc kubenswrapper[4750]: I0214 15:27:58.755471 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:27:58 crc kubenswrapper[4750]: I0214 15:27:58.755926 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" path="/var/lib/kubelet/pods/bdfe00f2-3b68-4a56-b421-fca252c24ce9/volumes" Feb 14 15:27:58 crc kubenswrapper[4750]: E0214 15:27:58.757105 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:28:01 crc kubenswrapper[4750]: I0214 15:28:01.866784 4750 generic.go:334] "Generic (PLEG): container finished" podID="e0e4c397-d891-49d6-9b55-8120044646b1" containerID="d44f994654a9535a9d48dec9bd3899d9aa50440713e46fdb5e4dc25157798c2a" exitCode=0 Feb 14 15:28:01 crc kubenswrapper[4750]: I0214 15:28:01.866892 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" event={"ID":"e0e4c397-d891-49d6-9b55-8120044646b1","Type":"ContainerDied","Data":"d44f994654a9535a9d48dec9bd3899d9aa50440713e46fdb5e4dc25157798c2a"} Feb 14 15:28:02 crc kubenswrapper[4750]: I0214 15:28:02.995535 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.034530 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-zgqkv"] Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.044695 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-zgqkv"] Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.177984 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58vhc\" (UniqueName: \"kubernetes.io/projected/e0e4c397-d891-49d6-9b55-8120044646b1-kube-api-access-58vhc\") pod \"e0e4c397-d891-49d6-9b55-8120044646b1\" (UID: \"e0e4c397-d891-49d6-9b55-8120044646b1\") " Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.178072 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e4c397-d891-49d6-9b55-8120044646b1-host\") pod \"e0e4c397-d891-49d6-9b55-8120044646b1\" (UID: 
\"e0e4c397-d891-49d6-9b55-8120044646b1\") " Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.178894 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e4c397-d891-49d6-9b55-8120044646b1-host" (OuterVolumeSpecName: "host") pod "e0e4c397-d891-49d6-9b55-8120044646b1" (UID: "e0e4c397-d891-49d6-9b55-8120044646b1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.195294 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e4c397-d891-49d6-9b55-8120044646b1-kube-api-access-58vhc" (OuterVolumeSpecName: "kube-api-access-58vhc") pod "e0e4c397-d891-49d6-9b55-8120044646b1" (UID: "e0e4c397-d891-49d6-9b55-8120044646b1"). InnerVolumeSpecName "kube-api-access-58vhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.281150 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58vhc\" (UniqueName: \"kubernetes.io/projected/e0e4c397-d891-49d6-9b55-8120044646b1-kube-api-access-58vhc\") on node \"crc\" DevicePath \"\"" Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.281182 4750 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e4c397-d891-49d6-9b55-8120044646b1-host\") on node \"crc\" DevicePath \"\"" Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.891055 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4789065eab88620fde2ef26c35299f6e9299d9a8e677a198abecc8335c4267b" Feb 14 15:28:03 crc kubenswrapper[4750]: I0214 15:28:03.891537 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-zgqkv" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.273846 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-xv97c"] Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274445 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="extract-utilities" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274464 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="extract-utilities" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274492 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="extract-content" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274500 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="extract-content" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274515 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274523 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274536 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="extract-content" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274544 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="extract-content" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274571 4750 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="extract-utilities" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274580 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="extract-utilities" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274591 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e4c397-d891-49d6-9b55-8120044646b1" containerName="container-00" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274599 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e4c397-d891-49d6-9b55-8120044646b1" containerName="container-00" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274627 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274635 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274649 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="extract-content" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274657 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="extract-content" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274670 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274678 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: E0214 15:28:04.274694 4750 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="extract-utilities" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274702 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="extract-utilities" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.274986 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfe00f2-3b68-4a56-b421-fca252c24ce9" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.275034 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="92eea70a-b0e4-43b6-973c-e632c365a222" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.275058 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd42577d-6a4a-42c3-b824-947e05d0da59" containerName="registry-server" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.275080 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e4c397-d891-49d6-9b55-8120044646b1" containerName="container-00" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.276481 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.412350 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71f0578b-e42f-4c02-8535-be0b347b8868-host\") pod \"crc-debug-xv97c\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.412438 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpmdg\" (UniqueName: \"kubernetes.io/projected/71f0578b-e42f-4c02-8535-be0b347b8868-kube-api-access-tpmdg\") pod \"crc-debug-xv97c\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.515320 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71f0578b-e42f-4c02-8535-be0b347b8868-host\") pod \"crc-debug-xv97c\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.515375 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpmdg\" (UniqueName: \"kubernetes.io/projected/71f0578b-e42f-4c02-8535-be0b347b8868-kube-api-access-tpmdg\") pod \"crc-debug-xv97c\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.515464 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71f0578b-e42f-4c02-8535-be0b347b8868-host\") pod \"crc-debug-xv97c\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc 
kubenswrapper[4750]: I0214 15:28:04.547732 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpmdg\" (UniqueName: \"kubernetes.io/projected/71f0578b-e42f-4c02-8535-be0b347b8868-kube-api-access-tpmdg\") pod \"crc-debug-xv97c\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.610572 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.759160 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e4c397-d891-49d6-9b55-8120044646b1" path="/var/lib/kubelet/pods/e0e4c397-d891-49d6-9b55-8120044646b1/volumes" Feb 14 15:28:04 crc kubenswrapper[4750]: I0214 15:28:04.901901 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" event={"ID":"71f0578b-e42f-4c02-8535-be0b347b8868","Type":"ContainerStarted","Data":"f47579024ecfd5a5a542cb5e9386fef346ebcae23a1d346f2c690f74a01e60e4"} Feb 14 15:28:05 crc kubenswrapper[4750]: I0214 15:28:05.913985 4750 generic.go:334] "Generic (PLEG): container finished" podID="71f0578b-e42f-4c02-8535-be0b347b8868" containerID="642ab56613c6b8feceb8f2f7e593aa500ff8b49d6d54bbf3b2aa026d800bb255" exitCode=0 Feb 14 15:28:05 crc kubenswrapper[4750]: I0214 15:28:05.914048 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" event={"ID":"71f0578b-e42f-4c02-8535-be0b347b8868","Type":"ContainerDied","Data":"642ab56613c6b8feceb8f2f7e593aa500ff8b49d6d54bbf3b2aa026d800bb255"} Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.075366 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.200456 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpmdg\" (UniqueName: \"kubernetes.io/projected/71f0578b-e42f-4c02-8535-be0b347b8868-kube-api-access-tpmdg\") pod \"71f0578b-e42f-4c02-8535-be0b347b8868\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.200737 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71f0578b-e42f-4c02-8535-be0b347b8868-host\") pod \"71f0578b-e42f-4c02-8535-be0b347b8868\" (UID: \"71f0578b-e42f-4c02-8535-be0b347b8868\") " Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.200826 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f0578b-e42f-4c02-8535-be0b347b8868-host" (OuterVolumeSpecName: "host") pod "71f0578b-e42f-4c02-8535-be0b347b8868" (UID: "71f0578b-e42f-4c02-8535-be0b347b8868"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.202031 4750 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71f0578b-e42f-4c02-8535-be0b347b8868-host\") on node \"crc\" DevicePath \"\"" Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.210426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f0578b-e42f-4c02-8535-be0b347b8868-kube-api-access-tpmdg" (OuterVolumeSpecName: "kube-api-access-tpmdg") pod "71f0578b-e42f-4c02-8535-be0b347b8868" (UID: "71f0578b-e42f-4c02-8535-be0b347b8868"). InnerVolumeSpecName "kube-api-access-tpmdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.309467 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpmdg\" (UniqueName: \"kubernetes.io/projected/71f0578b-e42f-4c02-8535-be0b347b8868-kube-api-access-tpmdg\") on node \"crc\" DevicePath \"\"" Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.942291 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.942300 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-xv97c" event={"ID":"71f0578b-e42f-4c02-8535-be0b347b8868","Type":"ContainerDied","Data":"f47579024ecfd5a5a542cb5e9386fef346ebcae23a1d346f2c690f74a01e60e4"} Feb 14 15:28:07 crc kubenswrapper[4750]: I0214 15:28:07.943278 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47579024ecfd5a5a542cb5e9386fef346ebcae23a1d346f2c690f74a01e60e4" Feb 14 15:28:08 crc kubenswrapper[4750]: I0214 15:28:08.262745 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-xv97c"] Feb 14 15:28:08 crc kubenswrapper[4750]: I0214 15:28:08.272692 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-xv97c"] Feb 14 15:28:08 crc kubenswrapper[4750]: I0214 15:28:08.764879 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f0578b-e42f-4c02-8535-be0b347b8868" path="/var/lib/kubelet/pods/71f0578b-e42f-4c02-8535-be0b347b8868/volumes" Feb 14 15:28:09 crc kubenswrapper[4750]: I0214 15:28:09.741867 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:28:09 crc kubenswrapper[4750]: E0214 15:28:09.742357 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:28:09 crc kubenswrapper[4750]: I0214 15:28:09.903781 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-2xnzs"] Feb 14 15:28:09 crc kubenswrapper[4750]: E0214 15:28:09.904718 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f0578b-e42f-4c02-8535-be0b347b8868" containerName="container-00" Feb 14 15:28:09 crc kubenswrapper[4750]: I0214 15:28:09.904739 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f0578b-e42f-4c02-8535-be0b347b8868" containerName="container-00" Feb 14 15:28:09 crc kubenswrapper[4750]: I0214 15:28:09.905037 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f0578b-e42f-4c02-8535-be0b347b8868" containerName="container-00" Feb 14 15:28:09 crc kubenswrapper[4750]: I0214 15:28:09.906014 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.068223 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c0d1a13-bea8-459a-a578-7242dcc18e14-host\") pod \"crc-debug-2xnzs\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.068569 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs55j\" (UniqueName: \"kubernetes.io/projected/1c0d1a13-bea8-459a-a578-7242dcc18e14-kube-api-access-fs55j\") pod \"crc-debug-2xnzs\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.171062 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c0d1a13-bea8-459a-a578-7242dcc18e14-host\") pod \"crc-debug-2xnzs\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.171167 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs55j\" (UniqueName: \"kubernetes.io/projected/1c0d1a13-bea8-459a-a578-7242dcc18e14-kube-api-access-fs55j\") pod \"crc-debug-2xnzs\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.171241 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c0d1a13-bea8-459a-a578-7242dcc18e14-host\") pod \"crc-debug-2xnzs\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc 
kubenswrapper[4750]: I0214 15:28:10.191269 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs55j\" (UniqueName: \"kubernetes.io/projected/1c0d1a13-bea8-459a-a578-7242dcc18e14-kube-api-access-fs55j\") pod \"crc-debug-2xnzs\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.223235 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.979079 4750 generic.go:334] "Generic (PLEG): container finished" podID="1c0d1a13-bea8-459a-a578-7242dcc18e14" containerID="e446d3246bdd89cf059522c0dc799b78b90b4b42e66f30154e67c484bf27fdbd" exitCode=0 Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.979151 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" event={"ID":"1c0d1a13-bea8-459a-a578-7242dcc18e14","Type":"ContainerDied","Data":"e446d3246bdd89cf059522c0dc799b78b90b4b42e66f30154e67c484bf27fdbd"} Feb 14 15:28:10 crc kubenswrapper[4750]: I0214 15:28:10.979675 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" event={"ID":"1c0d1a13-bea8-459a-a578-7242dcc18e14","Type":"ContainerStarted","Data":"ebe3b7c704bd62beeb6281d93b3a9245b6477b791f8ccd16d165de89a8c195e1"} Feb 14 15:28:11 crc kubenswrapper[4750]: I0214 15:28:11.022839 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-2xnzs"] Feb 14 15:28:11 crc kubenswrapper[4750]: I0214 15:28:11.035638 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2q8d6/crc-debug-2xnzs"] Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.125132 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.218528 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs55j\" (UniqueName: \"kubernetes.io/projected/1c0d1a13-bea8-459a-a578-7242dcc18e14-kube-api-access-fs55j\") pod \"1c0d1a13-bea8-459a-a578-7242dcc18e14\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.219070 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c0d1a13-bea8-459a-a578-7242dcc18e14-host\") pod \"1c0d1a13-bea8-459a-a578-7242dcc18e14\" (UID: \"1c0d1a13-bea8-459a-a578-7242dcc18e14\") " Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.219122 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c0d1a13-bea8-459a-a578-7242dcc18e14-host" (OuterVolumeSpecName: "host") pod "1c0d1a13-bea8-459a-a578-7242dcc18e14" (UID: "1c0d1a13-bea8-459a-a578-7242dcc18e14"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.219646 4750 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c0d1a13-bea8-459a-a578-7242dcc18e14-host\") on node \"crc\" DevicePath \"\"" Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.223873 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0d1a13-bea8-459a-a578-7242dcc18e14-kube-api-access-fs55j" (OuterVolumeSpecName: "kube-api-access-fs55j") pod "1c0d1a13-bea8-459a-a578-7242dcc18e14" (UID: "1c0d1a13-bea8-459a-a578-7242dcc18e14"). InnerVolumeSpecName "kube-api-access-fs55j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.322142 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs55j\" (UniqueName: \"kubernetes.io/projected/1c0d1a13-bea8-459a-a578-7242dcc18e14-kube-api-access-fs55j\") on node \"crc\" DevicePath \"\"" Feb 14 15:28:12 crc kubenswrapper[4750]: I0214 15:28:12.755647 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0d1a13-bea8-459a-a578-7242dcc18e14" path="/var/lib/kubelet/pods/1c0d1a13-bea8-459a-a578-7242dcc18e14/volumes" Feb 14 15:28:13 crc kubenswrapper[4750]: I0214 15:28:13.002852 4750 scope.go:117] "RemoveContainer" containerID="e446d3246bdd89cf059522c0dc799b78b90b4b42e66f30154e67c484bf27fdbd" Feb 14 15:28:13 crc kubenswrapper[4750]: I0214 15:28:13.002955 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/crc-debug-2xnzs" Feb 14 15:28:20 crc kubenswrapper[4750]: I0214 15:28:20.742480 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:28:20 crc kubenswrapper[4750]: E0214 15:28:20.743197 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:28:34 crc kubenswrapper[4750]: I0214 15:28:34.744367 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:28:34 crc kubenswrapper[4750]: E0214 15:28:34.746693 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:28:44 crc kubenswrapper[4750]: I0214 15:28:44.891907 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-listener/0.log" Feb 14 15:28:44 crc kubenswrapper[4750]: I0214 15:28:44.899758 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-evaluator/0.log" Feb 14 15:28:44 crc kubenswrapper[4750]: I0214 15:28:44.938551 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-api/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.099711 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-notifier/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.132783 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66b79d5688-qxq94_ba0aa13e-484e-48c3-9326-f606f3f5d98c/barbican-api/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.172653 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66b79d5688-qxq94_ba0aa13e-484e-48c3-9326-f606f3f5d98c/barbican-api-log/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.311904 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fdccf6dd6-ftmgl_cd37e6f7-6e18-4587-8237-234b4d5cf12a/barbican-keystone-listener/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.453340 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-fdccf6dd6-ftmgl_cd37e6f7-6e18-4587-8237-234b4d5cf12a/barbican-keystone-listener-log/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.530051 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f9f44bff-kmkdt_35b85269-938a-4bc4-8321-f11d72214b39/barbican-worker/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.599565 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f9f44bff-kmkdt_35b85269-938a-4bc4-8321-f11d72214b39/barbican-worker-log/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.684162 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq_35713168-58fa-49ee-8783-2631f53b02a9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.742573 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:28:45 crc kubenswrapper[4750]: E0214 15:28:45.742843 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.874647 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/ceilometer-central-agent/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.916032 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/proxy-httpd/0.log" Feb 14 
15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.932084 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/ceilometer-notification-agent/0.log" Feb 14 15:28:45 crc kubenswrapper[4750]: I0214 15:28:45.975449 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/sg-core/0.log" Feb 14 15:28:46 crc kubenswrapper[4750]: I0214 15:28:46.108765 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ecffb890-5905-4fc1-a005-86519c0c6aea/cinder-api-log/0.log" Feb 14 15:28:46 crc kubenswrapper[4750]: I0214 15:28:46.199644 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ecffb890-5905-4fc1-a005-86519c0c6aea/cinder-api/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.064337 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_41bd633c-6afd-4c10-a933-287724b60a3d/cinder-scheduler/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.118274 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_41bd633c-6afd-4c10-a933-287724b60a3d/probe/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.147837 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-r86mv_bce32f17-aa17-4e19-bc8f-05b9f58cf140/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.367376 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-69zmg_afa7c9b6-0f84-49e3-9e1d-667b2ff99d34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.403607 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l7bxg_1d5fe023-9111-4eb5-af17-2efdd4b3a354/init/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.588405 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l7bxg_1d5fe023-9111-4eb5-af17-2efdd4b3a354/init/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.609833 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8_3586119e-2daa-4e61-8c43-8e3a9c455ab5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.670227 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l7bxg_1d5fe023-9111-4eb5-af17-2efdd4b3a354/dnsmasq-dns/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.826696 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fe85d9dd-19fc-4155-af2c-62cc62eb029c/glance-httpd/0.log" Feb 14 15:28:47 crc kubenswrapper[4750]: I0214 15:28:47.865999 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fe85d9dd-19fc-4155-af2c-62cc62eb029c/glance-log/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.045874 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_27ce82d4-dcbe-48fe-8b91-8704ef172bf1/glance-log/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.052165 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_27ce82d4-dcbe-48fe-8b91-8704ef172bf1/glance-httpd/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.627453 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6_a0fe9116-89eb-49c2-a659-2dfdfe1c885a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.703890 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-d98b7d7bf-rc97s_7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf/heat-engine/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.764533 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5cddcdd877-f796q_d7bb0ba4-eebd-41f7-935f-b9ba2a635618/heat-api/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.916424 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6fd8cd6df7-qsknx_5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5/heat-cfnapi/0.log" Feb 14 15:28:48 crc kubenswrapper[4750]: I0214 15:28:48.923316 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pqhr7_8af7ba28-4efa-4a07-9199-c3c64c043543/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:49 crc kubenswrapper[4750]: I0214 15:28:49.143130 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29518021-6v549_4718c9f5-bcf2-48a7-bd19-6a97de6ed02a/keystone-cron/0.log" Feb 14 15:28:49 crc kubenswrapper[4750]: I0214 15:28:49.293894 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_98683c54-5137-4357-be49-f22cdf9715db/kube-state-metrics/0.log" Feb 14 15:28:49 crc kubenswrapper[4750]: I0214 15:28:49.361080 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bb6cf9d49-hj8cz_42f54e99-7974-44a5-9796-5ab9a50db818/keystone-api/0.log" Feb 14 15:28:49 crc kubenswrapper[4750]: I0214 15:28:49.407273 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hggq4_7184cd06-d52e-49d6-9a58-520b47303252/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:49 crc kubenswrapper[4750]: I0214 15:28:49.634038 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-cdxns_6f331906-e9ac-4779-b6a3-28ac233ab472/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:49 crc kubenswrapper[4750]: I0214 15:28:49.873187 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_1aaa3ab6-4225-441d-b5b9-85ec8d30ca01/mysqld-exporter/0.log" Feb 14 15:28:50 crc kubenswrapper[4750]: I0214 15:28:50.099410 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b96685565-flxmp_dbfc4c3a-8875-43db-8ca1-e829524d280f/neutron-api/0.log" Feb 14 15:28:50 crc kubenswrapper[4750]: I0214 15:28:50.102350 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b96685565-flxmp_dbfc4c3a-8875-43db-8ca1-e829524d280f/neutron-httpd/0.log" Feb 14 15:28:50 crc kubenswrapper[4750]: I0214 15:28:50.225004 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb_868da7c8-8b42-419d-9801-06c947d3333c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:50 crc kubenswrapper[4750]: I0214 15:28:50.729888 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_76bccfd8-7ab4-4daa-b272-188438293cf7/nova-cell0-conductor-conductor/0.log" Feb 14 15:28:50 crc kubenswrapper[4750]: I0214 15:28:50.936880 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8ed4f6ec-3953-48de-a051-2af04cdafeb4/nova-api-log/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.032906 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_8e8a5550-ea14-49fa-ae9d-b38d05a23254/nova-cell1-conductor-conductor/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.277638 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7ccaea34-1af0-4e5e-9771-bf53272bab57/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.306449 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nq9nf_e65f04d6-c4d8-4999-8a87-be675256e775/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.384772 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8ed4f6ec-3953-48de-a051-2af04cdafeb4/nova-api-api/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.571852 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2b53bb3a-9e74-4713-aedc-254b6671326d/nova-metadata-log/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.839343 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_42db0a00-1aa6-4754-840c-f93a2b927858/mysql-bootstrap/0.log" Feb 14 15:28:51 crc kubenswrapper[4750]: I0214 15:28:51.845624 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2f7209e1-923f-4507-8103-2020a196059f/nova-scheduler-scheduler/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.025413 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_42db0a00-1aa6-4754-840c-f93a2b927858/mysql-bootstrap/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.065300 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_42db0a00-1aa6-4754-840c-f93a2b927858/galera/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.252900 
4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e9776fb-2263-407b-93a2-3f27f9e0635f/mysql-bootstrap/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.516053 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e9776fb-2263-407b-93a2-3f27f9e0635f/mysql-bootstrap/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.779782 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e9776fb-2263-407b-93a2-3f27f9e0635f/galera/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.882227 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5d9cc9c1-5726-4507-bc40-a27f3aee83c4/openstackclient/0.log" Feb 14 15:28:52 crc kubenswrapper[4750]: I0214 15:28:52.965979 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4dn6h_761260d8-59af-48eb-bb26-aa7523be2d9d/ovn-controller/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.113168 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6h75r_a7797e0a-0f7c-42a0-bbe1-1c9f525eea52/openstack-network-exporter/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.305269 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovsdb-server-init/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.505530 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovsdb-server/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.512466 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovsdb-server-init/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.514954 4750 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovs-vswitchd/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.767934 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cdx5g_2d98d1d6-d8c3-4b5c-b848-afceef7706f4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.859316 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2b53bb3a-9e74-4713-aedc-254b6671326d/nova-metadata-metadata/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.969764 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_12a3b14e-20a2-4845-af37-ae9e7a6ebbc7/openstack-network-exporter/0.log" Feb 14 15:28:53 crc kubenswrapper[4750]: I0214 15:28:53.987421 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_12a3b14e-20a2-4845-af37-ae9e7a6ebbc7/ovn-northd/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.083500 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cd41b510-5787-4c7e-9e0b-22301cd49f54/openstack-network-exporter/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.187285 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cd41b510-5787-4c7e-9e0b-22301cd49f54/ovsdbserver-nb/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.295688 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_27afbd74-b285-4efa-bd3f-33cc3c46363d/openstack-network-exporter/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.351017 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_27afbd74-b285-4efa-bd3f-33cc3c46363d/ovsdbserver-sb/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.589819 4750 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-5474bc9d4d-7h6tg_c6e7913f-cd87-4593-9345-e10614cac99b/placement-api/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.657203 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5474bc9d4d-7h6tg_c6e7913f-cd87-4593-9345-e10614cac99b/placement-log/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.722377 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/init-config-reloader/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.890681 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/config-reloader/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.903596 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/prometheus/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.909071 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/init-config-reloader/0.log" Feb 14 15:28:54 crc kubenswrapper[4750]: I0214 15:28:54.974002 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/thanos-sidecar/0.log" Feb 14 15:28:55 crc kubenswrapper[4750]: I0214 15:28:55.097147 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0191954f-3b7c-4102-9784-f775fa6e08f2/setup-container/0.log" Feb 14 15:28:55 crc kubenswrapper[4750]: I0214 15:28:55.392032 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf78a33e-a78a-4048-afa0-af9c27e4425d/setup-container/0.log" Feb 14 15:28:55 crc kubenswrapper[4750]: I0214 15:28:55.418155 4750 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0191954f-3b7c-4102-9784-f775fa6e08f2/setup-container/0.log" Feb 14 15:28:55 crc kubenswrapper[4750]: I0214 15:28:55.460501 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0191954f-3b7c-4102-9784-f775fa6e08f2/rabbitmq/0.log" Feb 14 15:28:55 crc kubenswrapper[4750]: I0214 15:28:55.619194 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf78a33e-a78a-4048-afa0-af9c27e4425d/setup-container/0.log" Feb 14 15:28:55 crc kubenswrapper[4750]: I0214 15:28:55.667188 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf78a33e-a78a-4048-afa0-af9c27e4425d/rabbitmq/0.log" Feb 14 15:28:56 crc kubenswrapper[4750]: I0214 15:28:56.382448 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3f096636-5e6f-428e-8a30-9433d6ac312c/setup-container/0.log" Feb 14 15:28:56 crc kubenswrapper[4750]: I0214 15:28:56.611418 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_38d75bde-7432-41e0-860c-b2d7219e518a/setup-container/0.log" Feb 14 15:28:56 crc kubenswrapper[4750]: I0214 15:28:56.706380 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3f096636-5e6f-428e-8a30-9433d6ac312c/setup-container/0.log" Feb 14 15:28:56 crc kubenswrapper[4750]: I0214 15:28:56.839877 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3f096636-5e6f-428e-8a30-9433d6ac312c/rabbitmq/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.057105 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_38d75bde-7432-41e0-860c-b2d7219e518a/setup-container/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.120923 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_38d75bde-7432-41e0-860c-b2d7219e518a/rabbitmq/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.188914 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-726m4_2636faba-c74f-47a7-8a7c-eb14094ab50b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.388549 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl_579b6931-42c7-4a8f-9045-b9b993aa3fbd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.412791 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xtjr7_17baaf13-3126-48d1-a32e-522bf2bf43ff/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.584762 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-g4gfd_6ca25e17-509f-40d0-94e1-83db6398669c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:57 crc kubenswrapper[4750]: I0214 15:28:57.645780 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8m7xx_483eea7a-81f0-4f0c-92d8-dc0d3f713f10/ssh-known-hosts-edpm-deployment/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.456770 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b6899f7-wlrpr_1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c/proxy-server/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.613926 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b6899f7-wlrpr_1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c/proxy-httpd/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.622562 4750 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-ring-rebalance-p2fnm_e6aba344-d824-42da-996e-733b7480a2eb/swift-ring-rebalance/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.695002 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-auditor/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.846105 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-reaper/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.880090 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-replicator/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.892403 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-server/0.log" Feb 14 15:28:58 crc kubenswrapper[4750]: I0214 15:28:58.934341 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-auditor/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.080416 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-server/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.080436 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-replicator/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.094024 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-updater/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.192047 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-auditor/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.256652 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-expirer/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.353002 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-server/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.370769 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-replicator/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.395446 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-updater/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.471394 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/rsync/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.598099 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/swift-recon-cron/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.702605 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-58rdb_66d16d12-f651-4f21-9160-e22496e7e969/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:28:59 crc kubenswrapper[4750]: I0214 15:28:59.815840 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb_b5799dfb-5d7b-40e0-9187-056a19186b75/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:29:00 crc kubenswrapper[4750]: 
I0214 15:29:00.046627 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5/test-operator-logs-container/0.log" Feb 14 15:29:00 crc kubenswrapper[4750]: I0214 15:29:00.263599 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t_8e6d20f9-b240-426a-8769-6f07bf3f75d4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:29:00 crc kubenswrapper[4750]: I0214 15:29:00.747095 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:29:00 crc kubenswrapper[4750]: E0214 15:29:00.747433 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:29:00 crc kubenswrapper[4750]: I0214 15:29:00.829387 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e4d753a9-5bca-4940-9aa9-72a57f4f32a2/tempest-tests-tempest-tests-runner/0.log" Feb 14 15:29:02 crc kubenswrapper[4750]: I0214 15:29:02.358791 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2136d6a8-25e9-4eff-946e-bbc49dab0b04/memcached/0.log" Feb 14 15:29:11 crc kubenswrapper[4750]: I0214 15:29:11.742608 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:29:11 crc kubenswrapper[4750]: E0214 15:29:11.744788 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:29:24 crc kubenswrapper[4750]: I0214 15:29:24.742667 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:29:24 crc kubenswrapper[4750]: E0214 15:29:24.743693 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.038096 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/util/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.147995 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/util/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.172737 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/pull/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.225742 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/pull/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.389351 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/pull/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.395671 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/util/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.424623 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/extract/0.log" Feb 14 15:29:29 crc kubenswrapper[4750]: I0214 15:29:29.797700 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-bmlpf_ff1e2ca9-56b1-4511-b59b-14256631d65f/manager/0.log" Feb 14 15:29:30 crc kubenswrapper[4750]: I0214 15:29:30.169046 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-sh6f9_b559262e-cbcd-486e-8602-ece46ff1ed14/manager/0.log" Feb 14 15:29:30 crc kubenswrapper[4750]: I0214 15:29:30.407419 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-7s6qb_27f7394c-167e-4dda-bd08-b2d2a49d5f13/manager/0.log" Feb 14 15:29:30 crc kubenswrapper[4750]: I0214 15:29:30.576161 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-wttvx_8fbd7079-b94a-4632-bfd7-5d550d6cbe1d/manager/0.log" Feb 14 15:29:31 crc kubenswrapper[4750]: I0214 
15:29:31.037995 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-lgsxc_253c171a-c8f6-47d7-9490-a91d08ecd980/manager/0.log" Feb 14 15:29:31 crc kubenswrapper[4750]: I0214 15:29:31.282775 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-vjpnq_6e2542fa-3b0a-4a09-8d18-54037ebbbdf8/manager/0.log" Feb 14 15:29:31 crc kubenswrapper[4750]: I0214 15:29:31.766742 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-gnmwc_3e79ecca-328f-4049-945b-a506ba6d56f9/manager/0.log" Feb 14 15:29:31 crc kubenswrapper[4750]: I0214 15:29:31.971852 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-5mm9s_c04784fa-abc5-4c4c-b891-6d73db5a17e1/manager/0.log" Feb 14 15:29:32 crc kubenswrapper[4750]: I0214 15:29:32.205364 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-87m54_413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54/manager/0.log" Feb 14 15:29:32 crc kubenswrapper[4750]: I0214 15:29:32.234841 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-nvfv2_ac57dc96-afbc-4c7c-bd3c-9b763974a1c9/manager/0.log" Feb 14 15:29:32 crc kubenswrapper[4750]: I0214 15:29:32.513183 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-92td7_b146fda3-a4a4-4ebd-9ac0-32016dac7650/manager/0.log" Feb 14 15:29:32 crc kubenswrapper[4750]: I0214 15:29:32.642948 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-dq8x7_782775e0-d41f-4e9d-b6e5-4640a473b64a/manager/0.log" Feb 14 15:29:32 crc 
kubenswrapper[4750]: I0214 15:29:32.888839 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s_64bc51de-7c3f-406e-899b-cbf5339658ea/manager/0.log" Feb 14 15:29:33 crc kubenswrapper[4750]: I0214 15:29:33.131471 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7b948d557b-gvsr9_ec0b7c77-5944-4b0e-bbd1-af0e3e14da56/operator/0.log" Feb 14 15:29:33 crc kubenswrapper[4750]: I0214 15:29:33.838340 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tm7xh_0f00286b-008c-4863-b623-9789f8fa3b7a/registry-server/0.log" Feb 14 15:29:34 crc kubenswrapper[4750]: I0214 15:29:34.118094 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-nzqxf_585cf590-6dcc-49d2-a01f-9d6fa1612328/manager/0.log" Feb 14 15:29:34 crc kubenswrapper[4750]: I0214 15:29:34.309809 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-5dvwn_32e5c795-f58a-41c2-8b75-53ef0f77bef8/manager/0.log" Feb 14 15:29:34 crc kubenswrapper[4750]: I0214 15:29:34.555325 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xfnm8_bfeed48e-2ac8-4348-8c4c-0e239bd8c568/operator/0.log" Feb 14 15:29:34 crc kubenswrapper[4750]: I0214 15:29:34.817276 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-s7nsg_cfe0761e-63fc-480a-bfe7-c8c3e78d3785/manager/0.log" Feb 14 15:29:35 crc kubenswrapper[4750]: I0214 15:29:35.367156 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bd569c557-twg4v_aba78e62-6759-445f-b8a9-9c8b36cf4a3a/manager/0.log" Feb 14 
15:29:35 crc kubenswrapper[4750]: I0214 15:29:35.382855 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-zx56k_5ffa3c9f-1ef0-43df-b99c-3dd0c918f129/manager/0.log" Feb 14 15:29:35 crc kubenswrapper[4750]: I0214 15:29:35.489342 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6fdcfd45d9-rqdd9_8d3cff8a-de26-4c32-96b3-080e797d527f/manager/0.log" Feb 14 15:29:35 crc kubenswrapper[4750]: I0214 15:29:35.553855 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-xt5f9_34ecdca8-2927-432e-b770-c0c0d0b750e9/manager/0.log" Feb 14 15:29:35 crc kubenswrapper[4750]: I0214 15:29:35.700190 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-dnj8w_60e32c9b-3598-476f-85d8-7cab15748de5/manager/0.log" Feb 14 15:29:39 crc kubenswrapper[4750]: I0214 15:29:39.741588 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:29:39 crc kubenswrapper[4750]: E0214 15:29:39.742020 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:29:42 crc kubenswrapper[4750]: I0214 15:29:42.313849 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-77cn7_4b1ad62d-48a6-4228-8de2-3710bd15b7f4/manager/0.log" Feb 14 15:29:52 crc kubenswrapper[4750]: I0214 15:29:52.743378 4750 
scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:29:52 crc kubenswrapper[4750]: E0214 15:29:52.744426 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:29:57 crc kubenswrapper[4750]: I0214 15:29:57.493363 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rv4bx_9d1b8a23-779e-49fe-8e23-d9c4a53117b0/control-plane-machine-set-operator/0.log" Feb 14 15:29:57 crc kubenswrapper[4750]: I0214 15:29:57.658026 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzvww_19eb8e0f-83bc-40d2-a994-ba669171915e/kube-rbac-proxy/0.log" Feb 14 15:29:57 crc kubenswrapper[4750]: I0214 15:29:57.683424 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzvww_19eb8e0f-83bc-40d2-a994-ba669171915e/machine-api-operator/0.log" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.192432 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4"] Feb 14 15:30:00 crc kubenswrapper[4750]: E0214 15:30:00.193638 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0d1a13-bea8-459a-a578-7242dcc18e14" containerName="container-00" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.193653 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0d1a13-bea8-459a-a578-7242dcc18e14" containerName="container-00" Feb 14 15:30:00 crc 
kubenswrapper[4750]: I0214 15:30:00.193883 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0d1a13-bea8-459a-a578-7242dcc18e14" containerName="container-00" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.194785 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.201165 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.201551 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.257850 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4"] Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.375635 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswx5\" (UniqueName: \"kubernetes.io/projected/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-kube-api-access-wswx5\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.375933 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-secret-volume\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.376456 4750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-config-volume\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.478828 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswx5\" (UniqueName: \"kubernetes.io/projected/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-kube-api-access-wswx5\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.478952 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-secret-volume\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.479275 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-config-volume\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.480140 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-config-volume\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.500644 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-secret-volume\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.505292 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswx5\" (UniqueName: \"kubernetes.io/projected/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-kube-api-access-wswx5\") pod \"collect-profiles-29518050-hgrg4\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:00 crc kubenswrapper[4750]: I0214 15:30:00.518620 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:01 crc kubenswrapper[4750]: I0214 15:30:01.152493 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4"] Feb 14 15:30:01 crc kubenswrapper[4750]: W0214 15:30:01.164378 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881fb816_c9d1_41ec_8ad8_694cfd0ccdbf.slice/crio-f3b3bc40139163ed3b51d0254cc989d422bc56114acce3080c3df43fe07d4ba0 WatchSource:0}: Error finding container f3b3bc40139163ed3b51d0254cc989d422bc56114acce3080c3df43fe07d4ba0: Status 404 returned error can't find the container with id f3b3bc40139163ed3b51d0254cc989d422bc56114acce3080c3df43fe07d4ba0 Feb 14 15:30:02 crc kubenswrapper[4750]: I0214 15:30:02.164443 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" event={"ID":"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf","Type":"ContainerStarted","Data":"9245bdf5c1b3b08e6faa1e0b0addb96d42d37ccd22f29d9314a154a859350133"} Feb 14 15:30:02 crc kubenswrapper[4750]: I0214 15:30:02.164876 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" event={"ID":"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf","Type":"ContainerStarted","Data":"f3b3bc40139163ed3b51d0254cc989d422bc56114acce3080c3df43fe07d4ba0"} Feb 14 15:30:02 crc kubenswrapper[4750]: I0214 15:30:02.202031 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" podStartSLOduration=2.202008333 podStartE2EDuration="2.202008333s" podCreationTimestamp="2026-02-14 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
15:30:02.182589831 +0000 UTC m=+5874.208579332" watchObservedRunningTime="2026-02-14 15:30:02.202008333 +0000 UTC m=+5874.227997814" Feb 14 15:30:03 crc kubenswrapper[4750]: I0214 15:30:03.176257 4750 generic.go:334] "Generic (PLEG): container finished" podID="881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" containerID="9245bdf5c1b3b08e6faa1e0b0addb96d42d37ccd22f29d9314a154a859350133" exitCode=0 Feb 14 15:30:03 crc kubenswrapper[4750]: I0214 15:30:03.176305 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" event={"ID":"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf","Type":"ContainerDied","Data":"9245bdf5c1b3b08e6faa1e0b0addb96d42d37ccd22f29d9314a154a859350133"} Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.674938 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.785244 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswx5\" (UniqueName: \"kubernetes.io/projected/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-kube-api-access-wswx5\") pod \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.785315 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-config-volume\") pod \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\" (UID: \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.785641 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-secret-volume\") pod \"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\" (UID: 
\"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf\") " Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.786195 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" (UID: "881fb816-c9d1-41ec-8ad8-694cfd0ccdbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.786579 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.794252 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-kube-api-access-wswx5" (OuterVolumeSpecName: "kube-api-access-wswx5") pod "881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" (UID: "881fb816-c9d1-41ec-8ad8-694cfd0ccdbf"). InnerVolumeSpecName "kube-api-access-wswx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.794521 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" (UID: "881fb816-c9d1-41ec-8ad8-694cfd0ccdbf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.888808 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswx5\" (UniqueName: \"kubernetes.io/projected/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-kube-api-access-wswx5\") on node \"crc\" DevicePath \"\"" Feb 14 15:30:04 crc kubenswrapper[4750]: I0214 15:30:04.888858 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881fb816-c9d1-41ec-8ad8-694cfd0ccdbf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 15:30:05 crc kubenswrapper[4750]: I0214 15:30:05.205808 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" event={"ID":"881fb816-c9d1-41ec-8ad8-694cfd0ccdbf","Type":"ContainerDied","Data":"f3b3bc40139163ed3b51d0254cc989d422bc56114acce3080c3df43fe07d4ba0"} Feb 14 15:30:05 crc kubenswrapper[4750]: I0214 15:30:05.206426 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b3bc40139163ed3b51d0254cc989d422bc56114acce3080c3df43fe07d4ba0" Feb 14 15:30:05 crc kubenswrapper[4750]: I0214 15:30:05.206596 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518050-hgrg4" Feb 14 15:30:05 crc kubenswrapper[4750]: I0214 15:30:05.777146 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8"] Feb 14 15:30:05 crc kubenswrapper[4750]: I0214 15:30:05.787839 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518005-dcqc8"] Feb 14 15:30:06 crc kubenswrapper[4750]: I0214 15:30:06.760724 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc27cc7-9845-4f76-8ce6-a01f84cf2a41" path="/var/lib/kubelet/pods/6fc27cc7-9845-4f76-8ce6-a01f84cf2a41/volumes" Feb 14 15:30:07 crc kubenswrapper[4750]: I0214 15:30:07.743235 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:30:07 crc kubenswrapper[4750]: E0214 15:30:07.743738 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:30:12 crc kubenswrapper[4750]: I0214 15:30:12.037250 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gtl8p_1fb05431-9eaa-4243-8a3f-fdc9699e102a/cert-manager-controller/0.log" Feb 14 15:30:12 crc kubenswrapper[4750]: I0214 15:30:12.227403 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lns8l_5dcd18b1-7ced-4567-a937-01a9c6c8b66f/cert-manager-cainjector/0.log" Feb 14 15:30:12 crc kubenswrapper[4750]: I0214 15:30:12.280689 4750 log.go:25] "Finished parsing log 
file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jttq6_dead2cb0-8f6c-40c2-b4a5-a1eb2a506890/cert-manager-webhook/0.log" Feb 14 15:30:22 crc kubenswrapper[4750]: I0214 15:30:22.742612 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:30:22 crc kubenswrapper[4750]: E0214 15:30:22.744325 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:30:27 crc kubenswrapper[4750]: I0214 15:30:27.403852 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-vwcr5_8f83c9d2-f263-4114-b375-f18f32d91231/nmstate-console-plugin/0.log" Feb 14 15:30:27 crc kubenswrapper[4750]: I0214 15:30:27.572123 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7fb7g_d67741a3-cda0-41c4-ad20-dac649d22a2d/nmstate-handler/0.log" Feb 14 15:30:27 crc kubenswrapper[4750]: I0214 15:30:27.579708 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-v5j7m_85849127-24fe-4e0b-9c43-c0d80d007c66/kube-rbac-proxy/0.log" Feb 14 15:30:27 crc kubenswrapper[4750]: I0214 15:30:27.619035 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-v5j7m_85849127-24fe-4e0b-9c43-c0d80d007c66/nmstate-metrics/0.log" Feb 14 15:30:27 crc kubenswrapper[4750]: I0214 15:30:27.788123 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-zrfxn_32ffd70f-c819-435f-bb5f-a3a705e4052e/nmstate-operator/0.log" Feb 14 15:30:27 crc kubenswrapper[4750]: I0214 15:30:27.842899 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-nsxh6_a1ca23af-ddcc-4041-8e76-7220d4e32212/nmstate-webhook/0.log" Feb 14 15:30:29 crc kubenswrapper[4750]: I0214 15:30:29.927269 4750 scope.go:117] "RemoveContainer" containerID="9b3cc0337be7519e490883c0572c7d890d875875762cffebb60404dbe77847d5" Feb 14 15:30:33 crc kubenswrapper[4750]: I0214 15:30:33.742706 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:30:33 crc kubenswrapper[4750]: E0214 15:30:33.744032 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:30:41 crc kubenswrapper[4750]: I0214 15:30:41.336504 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/kube-rbac-proxy/0.log" Feb 14 15:30:41 crc kubenswrapper[4750]: I0214 15:30:41.379575 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/manager/0.log" Feb 14 15:30:47 crc kubenswrapper[4750]: I0214 15:30:47.742350 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:30:47 crc kubenswrapper[4750]: E0214 15:30:47.743218 4750 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:30:56 crc kubenswrapper[4750]: I0214 15:30:56.878424 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6hfcq_874d7068-1761-42b7-8e65-5ea7669259f4/prometheus-operator/0.log" Feb 14 15:30:57 crc kubenswrapper[4750]: I0214 15:30:57.036267 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_75d8f617-5e52-472a-922a-88563b49d041/prometheus-operator-admission-webhook/0.log" Feb 14 15:30:57 crc kubenswrapper[4750]: I0214 15:30:57.077266 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_f776de07-4c75-4295-839f-6e10713be326/prometheus-operator-admission-webhook/0.log" Feb 14 15:30:57 crc kubenswrapper[4750]: I0214 15:30:57.250423 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xn27s_1852ee04-b190-41b5-8261-d481c237b27d/operator/0.log" Feb 14 15:30:57 crc kubenswrapper[4750]: I0214 15:30:57.269438 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-pdpkt_95744c6b-6feb-4934-b1b6-6d73a3c17ad0/observability-ui-dashboards/0.log" Feb 14 15:30:58 crc kubenswrapper[4750]: I0214 15:30:58.089354 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vl469_8f6b191a-aa81-4827-80eb-5bbbdf54eeba/perses-operator/0.log" Feb 
14 15:30:59 crc kubenswrapper[4750]: I0214 15:30:59.741838 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:30:59 crc kubenswrapper[4750]: E0214 15:30:59.743101 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:31:01 crc kubenswrapper[4750]: I0214 15:31:01.634092 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-85b6899f7-wlrpr" podUID="1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 14 15:31:12 crc kubenswrapper[4750]: I0214 15:31:12.743485 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:31:12 crc kubenswrapper[4750]: E0214 15:31:12.744609 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:31:14 crc kubenswrapper[4750]: I0214 15:31:14.733657 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-8x65s_f13003c2-701e-4806-abb5-23f0e95cf8c2/cluster-logging-operator/0.log" Feb 14 15:31:14 crc kubenswrapper[4750]: I0214 15:31:14.918767 4750 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-logging_collector-s8ptt_a6b3f125-069d-4e80-92bc-3e4c32659e7a/collector/0.log" Feb 14 15:31:14 crc kubenswrapper[4750]: I0214 15:31:14.957420 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_bfa5f022-80f6-4ae5-8734-d6b9b9925490/loki-compactor/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.343798 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-zgn87_75cfd9e5-1c5d-4f8c-b736-c7f4d3415033/loki-distributor/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.382010 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-z8vwl_abdb8ead-5282-4e10-a261-b90509d22bbd/gateway/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.520725 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-z8vwl_abdb8ead-5282-4e10-a261-b90509d22bbd/opa/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.595028 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-zsk8f_11746f0c-702d-4684-97e8-46c8b3f2d75a/gateway/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.621701 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-zsk8f_11746f0c-702d-4684-97e8-46c8b3f2d75a/opa/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.775579 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7d57e203-6e0c-4079-ba36-ffb3c7e69913/loki-index-gateway/0.log" Feb 14 15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.914412 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_d4d23b53-2885-4966-aa62-1e61fd2f2af6/loki-ingester/0.log" Feb 14 
15:31:15 crc kubenswrapper[4750]: I0214 15:31:15.994299 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-hwkcr_158c19c7-53f4-4964-89df-ee7509251e08/loki-querier/0.log" Feb 14 15:31:16 crc kubenswrapper[4750]: I0214 15:31:16.118807 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-j8bzm_6bb3081d-4136-43e1-a9a9-9d9b5ce10809/loki-query-frontend/0.log" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.146489 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8jhvp"] Feb 14 15:31:20 crc kubenswrapper[4750]: E0214 15:31:20.148451 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" containerName="collect-profiles" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.148490 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" containerName="collect-profiles" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.148746 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="881fb816-c9d1-41ec-8ad8-694cfd0ccdbf" containerName="collect-profiles" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.150780 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.156540 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jhvp"] Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.252635 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smz5\" (UniqueName: \"kubernetes.io/projected/eedbe2c6-2202-4d81-9514-7afde3e439e3-kube-api-access-2smz5\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.252980 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-catalog-content\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.253362 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-utilities\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.355677 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2smz5\" (UniqueName: \"kubernetes.io/projected/eedbe2c6-2202-4d81-9514-7afde3e439e3-kube-api-access-2smz5\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.355797 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-catalog-content\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.355903 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-utilities\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.356619 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-catalog-content\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.362530 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-utilities\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.385002 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2smz5\" (UniqueName: \"kubernetes.io/projected/eedbe2c6-2202-4d81-9514-7afde3e439e3-kube-api-access-2smz5\") pod \"redhat-operators-8jhvp\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.486274 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:20 crc kubenswrapper[4750]: I0214 15:31:20.985775 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jhvp"] Feb 14 15:31:21 crc kubenswrapper[4750]: I0214 15:31:21.032923 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerStarted","Data":"4d8e5663f9765dfd99193185d22a66251f501028c3de7388d3bbd931c500c3d3"} Feb 14 15:31:22 crc kubenswrapper[4750]: I0214 15:31:22.044632 4750 generic.go:334] "Generic (PLEG): container finished" podID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerID="d471f828895c07be2facebff382c8c20e8b38ce6f95072a23012597c76930b30" exitCode=0 Feb 14 15:31:22 crc kubenswrapper[4750]: I0214 15:31:22.044719 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerDied","Data":"d471f828895c07be2facebff382c8c20e8b38ce6f95072a23012597c76930b30"} Feb 14 15:31:22 crc kubenswrapper[4750]: I0214 15:31:22.048847 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 15:31:23 crc kubenswrapper[4750]: I0214 15:31:23.058732 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerStarted","Data":"907ad1472c4f265db3a4ade89dbc1617d92bd3ae23276a5d59e23b6c6adfba63"} Feb 14 15:31:27 crc kubenswrapper[4750]: I0214 15:31:27.742253 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:31:27 crc kubenswrapper[4750]: E0214 15:31:27.743151 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:31:28 crc kubenswrapper[4750]: I0214 15:31:28.124901 4750 generic.go:334] "Generic (PLEG): container finished" podID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerID="907ad1472c4f265db3a4ade89dbc1617d92bd3ae23276a5d59e23b6c6adfba63" exitCode=0 Feb 14 15:31:28 crc kubenswrapper[4750]: I0214 15:31:28.124992 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerDied","Data":"907ad1472c4f265db3a4ade89dbc1617d92bd3ae23276a5d59e23b6c6adfba63"} Feb 14 15:31:30 crc kubenswrapper[4750]: I0214 15:31:30.151491 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerStarted","Data":"74ea21b2a52ae46d507d7b620bed0d6b06aec79424690d810c9cce018712bb57"} Feb 14 15:31:30 crc kubenswrapper[4750]: I0214 15:31:30.176622 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8jhvp" podStartSLOduration=2.992104681 podStartE2EDuration="10.176592831s" podCreationTimestamp="2026-02-14 15:31:20 +0000 UTC" firstStartedPulling="2026-02-14 15:31:22.047128824 +0000 UTC m=+5954.073118325" lastFinishedPulling="2026-02-14 15:31:29.231616984 +0000 UTC m=+5961.257606475" observedRunningTime="2026-02-14 15:31:30.170625241 +0000 UTC m=+5962.196614722" watchObservedRunningTime="2026-02-14 15:31:30.176592831 +0000 UTC m=+5962.202582332" Feb 14 15:31:30 crc kubenswrapper[4750]: I0214 15:31:30.487220 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:30 crc kubenswrapper[4750]: I0214 15:31:30.487290 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:31:32 crc kubenswrapper[4750]: I0214 15:31:32.118369 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jhvp" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" probeResult="failure" output=< Feb 14 15:31:32 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:31:32 crc kubenswrapper[4750]: > Feb 14 15:31:33 crc kubenswrapper[4750]: I0214 15:31:33.436585 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-v9m54_da5b2754-b4d8-46a1-ad93-926e2ae005eb/kube-rbac-proxy/0.log" Feb 14 15:31:33 crc kubenswrapper[4750]: I0214 15:31:33.585976 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-v9m54_da5b2754-b4d8-46a1-ad93-926e2ae005eb/controller/0.log" Feb 14 15:31:33 crc kubenswrapper[4750]: I0214 15:31:33.638850 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:31:33 crc kubenswrapper[4750]: I0214 15:31:33.898060 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:31:33 crc kubenswrapper[4750]: I0214 15:31:33.939219 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:31:33 crc kubenswrapper[4750]: I0214 15:31:33.985415 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:31:34 crc 
kubenswrapper[4750]: I0214 15:31:34.016103 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.189938 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.215072 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.215177 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.249261 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.431039 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.461264 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.487208 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.504645 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/controller/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.676223 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/frr-metrics/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.717822 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/kube-rbac-proxy/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.793790 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/kube-rbac-proxy-frr/0.log" Feb 14 15:31:34 crc kubenswrapper[4750]: I0214 15:31:34.967775 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/reloader/0.log" Feb 14 15:31:35 crc kubenswrapper[4750]: I0214 15:31:35.073346 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-8ljxj_b2892e47-7716-4ebc-86ef-376d952f3546/frr-k8s-webhook-server/0.log" Feb 14 15:31:35 crc kubenswrapper[4750]: I0214 15:31:35.407652 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-86bdb8fc5c-ps8mb_863644f3-8ec4-4391-a74e-7fe2d8dc4b3c/manager/0.log" Feb 14 15:31:35 crc kubenswrapper[4750]: I0214 15:31:35.644581 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wbx4h_9f176239-5523-47f3-909c-e7c77b65acf5/kube-rbac-proxy/0.log" Feb 14 15:31:35 crc kubenswrapper[4750]: I0214 15:31:35.684855 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5967b4f7c5-67sfd_bbf10811-6f76-4024-83ce-7263f00af6bb/webhook-server/0.log" Feb 14 15:31:36 crc kubenswrapper[4750]: I0214 15:31:36.496012 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wbx4h_9f176239-5523-47f3-909c-e7c77b65acf5/speaker/0.log" Feb 14 15:31:36 crc kubenswrapper[4750]: I0214 15:31:36.643175 4750 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/frr/0.log" Feb 14 15:31:38 crc kubenswrapper[4750]: I0214 15:31:38.751439 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:31:38 crc kubenswrapper[4750]: E0214 15:31:38.752460 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:31:41 crc kubenswrapper[4750]: I0214 15:31:41.547675 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jhvp" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" probeResult="failure" output=< Feb 14 15:31:41 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:31:41 crc kubenswrapper[4750]: > Feb 14 15:31:51 crc kubenswrapper[4750]: I0214 15:31:51.553674 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jhvp" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" probeResult="failure" output=< Feb 14 15:31:51 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:31:51 crc kubenswrapper[4750]: > Feb 14 15:31:51 crc kubenswrapper[4750]: I0214 15:31:51.925563 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/util/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.063789 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/util/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.071132 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/pull/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.106711 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/pull/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.338917 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/util/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.368323 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/extract/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.394711 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/pull/0.log" Feb 14 15:31:52 crc kubenswrapper[4750]: I0214 15:31:52.742975 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:31:52 crc kubenswrapper[4750]: E0214 15:31:52.743680 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.041748 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/util/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.262404 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/pull/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.293088 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/pull/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.311459 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/util/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.638745 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/pull/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.654760 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/extract/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.656045 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/util/0.log" Feb 14 15:31:53 crc kubenswrapper[4750]: I0214 15:31:53.824557 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/util/0.log" Feb 14 15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.003779 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/pull/0.log" Feb 14 15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.032464 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/util/0.log" Feb 14 15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.036355 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/pull/0.log" Feb 14 15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.186820 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/util/0.log" Feb 14 15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.187663 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/pull/0.log" Feb 14 15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.221563 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/extract/0.log" Feb 14 
15:31:54 crc kubenswrapper[4750]: I0214 15:31:54.347311 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-utilities/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.146766 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-utilities/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.190104 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-content/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.193574 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-content/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.440746 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-utilities/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.527626 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-content/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.660705 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-utilities/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.912750 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-utilities/0.log" Feb 14 15:31:55 crc kubenswrapper[4750]: I0214 15:31:55.915914 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-content/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.035538 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-content/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.154838 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-utilities/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.174678 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-content/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.221571 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/registry-server/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.339747 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/util/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.581104 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/util/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.645798 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/pull/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.659996 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/pull/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.827689 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/pull/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.879058 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/util/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.925813 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/extract/0.log" Feb 14 15:31:56 crc kubenswrapper[4750]: I0214 15:31:56.989850 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/registry-server/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.047990 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/util/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.220923 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/pull/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.221782 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/pull/0.log" Feb 14 15:31:57 crc 
kubenswrapper[4750]: I0214 15:31:57.222316 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/util/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.418669 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/util/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.430651 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/pull/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.452938 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/extract/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.463432 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tdtms_b634ea23-ca70-446c-8a62-0910256d9025/marketplace-operator/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.609072 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-utilities/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.790285 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-content/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.805468 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-content/0.log" Feb 14 
15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.814972 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-utilities/0.log" Feb 14 15:31:57 crc kubenswrapper[4750]: I0214 15:31:57.965985 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-content/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.182000 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-utilities/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.201415 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-utilities/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.393881 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/registry-server/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.466810 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-content/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.473587 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-content/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.478401 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-utilities/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.690446 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-utilities/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.696269 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-content/0.log" Feb 14 15:31:58 crc kubenswrapper[4750]: I0214 15:31:58.748689 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/extract-utilities/0.log" Feb 14 15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.052514 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/extract-utilities/0.log" Feb 14 15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.066907 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/extract-content/0.log" Feb 14 15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.075065 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/extract-content/0.log" Feb 14 15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.329619 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/extract-content/0.log" Feb 14 15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.332721 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/registry-server/0.log" Feb 14 15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.345265 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/extract-utilities/0.log" Feb 14 
15:31:59 crc kubenswrapper[4750]: I0214 15:31:59.346479 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8jhvp_eedbe2c6-2202-4d81-9514-7afde3e439e3/registry-server/0.log" Feb 14 15:32:01 crc kubenswrapper[4750]: I0214 15:32:01.542327 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jhvp" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" probeResult="failure" output=< Feb 14 15:32:01 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:32:01 crc kubenswrapper[4750]: > Feb 14 15:32:07 crc kubenswrapper[4750]: I0214 15:32:07.742488 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:32:07 crc kubenswrapper[4750]: E0214 15:32:07.750931 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:32:10 crc kubenswrapper[4750]: I0214 15:32:10.545176 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:32:10 crc kubenswrapper[4750]: I0214 15:32:10.615414 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:32:12 crc kubenswrapper[4750]: I0214 15:32:12.113727 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jhvp"] Feb 14 15:32:12 crc kubenswrapper[4750]: I0214 15:32:12.116623 4750 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-8jhvp" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" containerID="cri-o://74ea21b2a52ae46d507d7b620bed0d6b06aec79424690d810c9cce018712bb57" gracePeriod=2 Feb 14 15:32:12 crc kubenswrapper[4750]: I0214 15:32:12.631282 4750 generic.go:334] "Generic (PLEG): container finished" podID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerID="74ea21b2a52ae46d507d7b620bed0d6b06aec79424690d810c9cce018712bb57" exitCode=0 Feb 14 15:32:12 crc kubenswrapper[4750]: I0214 15:32:12.631323 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerDied","Data":"74ea21b2a52ae46d507d7b620bed0d6b06aec79424690d810c9cce018712bb57"} Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.214913 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.242855 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2smz5\" (UniqueName: \"kubernetes.io/projected/eedbe2c6-2202-4d81-9514-7afde3e439e3-kube-api-access-2smz5\") pod \"eedbe2c6-2202-4d81-9514-7afde3e439e3\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.243456 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-catalog-content\") pod \"eedbe2c6-2202-4d81-9514-7afde3e439e3\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.243574 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-utilities\") pod 
\"eedbe2c6-2202-4d81-9514-7afde3e439e3\" (UID: \"eedbe2c6-2202-4d81-9514-7afde3e439e3\") " Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.244477 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-utilities" (OuterVolumeSpecName: "utilities") pod "eedbe2c6-2202-4d81-9514-7afde3e439e3" (UID: "eedbe2c6-2202-4d81-9514-7afde3e439e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.288286 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedbe2c6-2202-4d81-9514-7afde3e439e3-kube-api-access-2smz5" (OuterVolumeSpecName: "kube-api-access-2smz5") pod "eedbe2c6-2202-4d81-9514-7afde3e439e3" (UID: "eedbe2c6-2202-4d81-9514-7afde3e439e3"). InnerVolumeSpecName "kube-api-access-2smz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.347699 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.347736 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2smz5\" (UniqueName: \"kubernetes.io/projected/eedbe2c6-2202-4d81-9514-7afde3e439e3-kube-api-access-2smz5\") on node \"crc\" DevicePath \"\"" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.362051 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eedbe2c6-2202-4d81-9514-7afde3e439e3" (UID: "eedbe2c6-2202-4d81-9514-7afde3e439e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.449995 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedbe2c6-2202-4d81-9514-7afde3e439e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.642420 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jhvp" event={"ID":"eedbe2c6-2202-4d81-9514-7afde3e439e3","Type":"ContainerDied","Data":"4d8e5663f9765dfd99193185d22a66251f501028c3de7388d3bbd931c500c3d3"} Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.642553 4750 scope.go:117] "RemoveContainer" containerID="74ea21b2a52ae46d507d7b620bed0d6b06aec79424690d810c9cce018712bb57" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.642558 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jhvp" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.674371 4750 scope.go:117] "RemoveContainer" containerID="907ad1472c4f265db3a4ade89dbc1617d92bd3ae23276a5d59e23b6c6adfba63" Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.687513 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jhvp"] Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.702895 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8jhvp"] Feb 14 15:32:13 crc kubenswrapper[4750]: I0214 15:32:13.707311 4750 scope.go:117] "RemoveContainer" containerID="d471f828895c07be2facebff382c8c20e8b38ce6f95072a23012597c76930b30" Feb 14 15:32:14 crc kubenswrapper[4750]: I0214 15:32:14.311714 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6hfcq_874d7068-1761-42b7-8e65-5ea7669259f4/prometheus-operator/0.log" Feb 14 15:32:14 crc 
kubenswrapper[4750]: I0214 15:32:14.354705 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_75d8f617-5e52-472a-922a-88563b49d041/prometheus-operator-admission-webhook/0.log" Feb 14 15:32:14 crc kubenswrapper[4750]: I0214 15:32:14.383386 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_f776de07-4c75-4295-839f-6e10713be326/prometheus-operator-admission-webhook/0.log" Feb 14 15:32:14 crc kubenswrapper[4750]: I0214 15:32:14.506475 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xn27s_1852ee04-b190-41b5-8261-d481c237b27d/operator/0.log" Feb 14 15:32:14 crc kubenswrapper[4750]: I0214 15:32:14.542741 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-pdpkt_95744c6b-6feb-4934-b1b6-6d73a3c17ad0/observability-ui-dashboards/0.log" Feb 14 15:32:14 crc kubenswrapper[4750]: I0214 15:32:14.565435 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vl469_8f6b191a-aa81-4827-80eb-5bbbdf54eeba/perses-operator/0.log" Feb 14 15:32:14 crc kubenswrapper[4750]: I0214 15:32:14.763542 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" path="/var/lib/kubelet/pods/eedbe2c6-2202-4d81-9514-7afde3e439e3/volumes" Feb 14 15:32:19 crc kubenswrapper[4750]: I0214 15:32:19.741826 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:32:19 crc kubenswrapper[4750]: E0214 15:32:19.743410 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:32:29 crc kubenswrapper[4750]: I0214 15:32:29.217910 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/kube-rbac-proxy/0.log" Feb 14 15:32:29 crc kubenswrapper[4750]: I0214 15:32:29.268199 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/manager/0.log" Feb 14 15:32:34 crc kubenswrapper[4750]: I0214 15:32:34.743405 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:32:35 crc kubenswrapper[4750]: I0214 15:32:35.936198 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"4f4bdec9fd2c22d705b7ae9c5d115cf1eb8ca3b9e424085f60383ba37f38e84a"} Feb 14 15:32:44 crc kubenswrapper[4750]: E0214 15:32:44.783645 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:59650->38.102.83.36:35453: write tcp 38.102.83.36:59650->38.102.83.36:35453: write: connection reset by peer Feb 14 15:33:30 crc kubenswrapper[4750]: I0214 15:33:30.083838 4750 scope.go:117] "RemoveContainer" containerID="d44f994654a9535a9d48dec9bd3899d9aa50440713e46fdb5e4dc25157798c2a" Feb 14 15:34:30 crc kubenswrapper[4750]: I0214 15:34:30.228907 4750 scope.go:117] "RemoveContainer" containerID="642ab56613c6b8feceb8f2f7e593aa500ff8b49d6d54bbf3b2aa026d800bb255" Feb 14 15:34:36 crc kubenswrapper[4750]: I0214 15:34:36.435479 4750 generic.go:334] "Generic (PLEG): container 
finished" podID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerID="1abd7d6d262ba9b37a2320d2f9c56c9186a56d204949e476d28b78acfe56e82d" exitCode=0 Feb 14 15:34:36 crc kubenswrapper[4750]: I0214 15:34:36.435593 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2q8d6/must-gather-schln" event={"ID":"f3d51424-6083-455e-b9be-7ddc3b521a8d","Type":"ContainerDied","Data":"1abd7d6d262ba9b37a2320d2f9c56c9186a56d204949e476d28b78acfe56e82d"} Feb 14 15:34:36 crc kubenswrapper[4750]: I0214 15:34:36.437334 4750 scope.go:117] "RemoveContainer" containerID="1abd7d6d262ba9b37a2320d2f9c56c9186a56d204949e476d28b78acfe56e82d" Feb 14 15:34:36 crc kubenswrapper[4750]: I0214 15:34:36.766454 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2q8d6_must-gather-schln_f3d51424-6083-455e-b9be-7ddc3b521a8d/gather/0.log" Feb 14 15:34:45 crc kubenswrapper[4750]: I0214 15:34:45.384033 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2q8d6/must-gather-schln"] Feb 14 15:34:45 crc kubenswrapper[4750]: I0214 15:34:45.384798 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2q8d6/must-gather-schln" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="copy" containerID="cri-o://91e2acc658592e2c2b31addc3926767b04d268e06a70b5ed2075e8d119d557e2" gracePeriod=2 Feb 14 15:34:45 crc kubenswrapper[4750]: I0214 15:34:45.397013 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2q8d6/must-gather-schln"] Feb 14 15:34:45 crc kubenswrapper[4750]: I0214 15:34:45.563009 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2q8d6_must-gather-schln_f3d51424-6083-455e-b9be-7ddc3b521a8d/copy/0.log" Feb 14 15:34:45 crc kubenswrapper[4750]: I0214 15:34:45.563638 4750 generic.go:334] "Generic (PLEG): container finished" podID="f3d51424-6083-455e-b9be-7ddc3b521a8d" 
containerID="91e2acc658592e2c2b31addc3926767b04d268e06a70b5ed2075e8d119d557e2" exitCode=143 Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.023243 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2q8d6_must-gather-schln_f3d51424-6083-455e-b9be-7ddc3b521a8d/copy/0.log" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.024089 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.145683 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tvcd\" (UniqueName: \"kubernetes.io/projected/f3d51424-6083-455e-b9be-7ddc3b521a8d-kube-api-access-2tvcd\") pod \"f3d51424-6083-455e-b9be-7ddc3b521a8d\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.145845 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f3d51424-6083-455e-b9be-7ddc3b521a8d-must-gather-output\") pod \"f3d51424-6083-455e-b9be-7ddc3b521a8d\" (UID: \"f3d51424-6083-455e-b9be-7ddc3b521a8d\") " Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.153795 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d51424-6083-455e-b9be-7ddc3b521a8d-kube-api-access-2tvcd" (OuterVolumeSpecName: "kube-api-access-2tvcd") pod "f3d51424-6083-455e-b9be-7ddc3b521a8d" (UID: "f3d51424-6083-455e-b9be-7ddc3b521a8d"). InnerVolumeSpecName "kube-api-access-2tvcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.249025 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tvcd\" (UniqueName: \"kubernetes.io/projected/f3d51424-6083-455e-b9be-7ddc3b521a8d-kube-api-access-2tvcd\") on node \"crc\" DevicePath \"\"" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.364609 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d51424-6083-455e-b9be-7ddc3b521a8d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f3d51424-6083-455e-b9be-7ddc3b521a8d" (UID: "f3d51424-6083-455e-b9be-7ddc3b521a8d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.453494 4750 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f3d51424-6083-455e-b9be-7ddc3b521a8d-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.593346 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2q8d6_must-gather-schln_f3d51424-6083-455e-b9be-7ddc3b521a8d/copy/0.log" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.594610 4750 scope.go:117] "RemoveContainer" containerID="91e2acc658592e2c2b31addc3926767b04d268e06a70b5ed2075e8d119d557e2" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.595788 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2q8d6/must-gather-schln" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.649485 4750 scope.go:117] "RemoveContainer" containerID="1abd7d6d262ba9b37a2320d2f9c56c9186a56d204949e476d28b78acfe56e82d" Feb 14 15:34:46 crc kubenswrapper[4750]: I0214 15:34:46.755162 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" path="/var/lib/kubelet/pods/f3d51424-6083-455e-b9be-7ddc3b521a8d/volumes" Feb 14 15:35:00 crc kubenswrapper[4750]: I0214 15:35:00.128690 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:35:00 crc kubenswrapper[4750]: I0214 15:35:00.129370 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:35:30 crc kubenswrapper[4750]: I0214 15:35:30.129716 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:35:30 crc kubenswrapper[4750]: I0214 15:35:30.130563 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.129624 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.130267 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.130320 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.131506 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f4bdec9fd2c22d705b7ae9c5d115cf1eb8ca3b9e424085f60383ba37f38e84a"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.131602 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://4f4bdec9fd2c22d705b7ae9c5d115cf1eb8ca3b9e424085f60383ba37f38e84a" gracePeriod=600 Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.541396 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" 
containerID="4f4bdec9fd2c22d705b7ae9c5d115cf1eb8ca3b9e424085f60383ba37f38e84a" exitCode=0 Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.541463 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"4f4bdec9fd2c22d705b7ae9c5d115cf1eb8ca3b9e424085f60383ba37f38e84a"} Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.541671 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"} Feb 14 15:36:00 crc kubenswrapper[4750]: I0214 15:36:00.541689 4750 scope.go:117] "RemoveContainer" containerID="046faa2bbfab2d89e7e8c30b6f7e0bdd7c4069aed6bef67d80b43c2508937e0b" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.057860 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mcnd6"] Feb 14 15:37:42 crc kubenswrapper[4750]: E0214 15:37:42.062984 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="gather" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.063011 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="gather" Feb 14 15:37:42 crc kubenswrapper[4750]: E0214 15:37:42.063035 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.063045 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" Feb 14 15:37:42 crc kubenswrapper[4750]: E0214 15:37:42.063083 4750 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="copy" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.063092 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="copy" Feb 14 15:37:42 crc kubenswrapper[4750]: E0214 15:37:42.063105 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="extract-utilities" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.063154 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="extract-utilities" Feb 14 15:37:42 crc kubenswrapper[4750]: E0214 15:37:42.063203 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="extract-content" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.063212 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="extract-content" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.064474 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="gather" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.064512 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedbe2c6-2202-4d81-9514-7afde3e439e3" containerName="registry-server" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.064546 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d51424-6083-455e-b9be-7ddc3b521a8d" containerName="copy" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.072468 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.116684 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57gq\" (UniqueName: \"kubernetes.io/projected/a3be7a6e-a418-444d-8f6d-677860b493e0-kube-api-access-h57gq\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.117150 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-utilities\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.117332 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-catalog-content\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.163385 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcnd6"] Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.219855 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-catalog-content\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.220318 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h57gq\" (UniqueName: \"kubernetes.io/projected/a3be7a6e-a418-444d-8f6d-677860b493e0-kube-api-access-h57gq\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.220470 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-catalog-content\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.220710 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-utilities\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.221003 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-utilities\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.269507 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h57gq\" (UniqueName: \"kubernetes.io/projected/a3be7a6e-a418-444d-8f6d-677860b493e0-kube-api-access-h57gq\") pod \"redhat-marketplace-mcnd6\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:42 crc kubenswrapper[4750]: I0214 15:37:42.441084 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:43 crc kubenswrapper[4750]: I0214 15:37:43.228679 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcnd6"] Feb 14 15:37:43 crc kubenswrapper[4750]: W0214 15:37:43.252338 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3be7a6e_a418_444d_8f6d_677860b493e0.slice/crio-bd6263cae2d2dd9b85c5edf1dbb56822e674825a6659da89d02d80891c867568 WatchSource:0}: Error finding container bd6263cae2d2dd9b85c5edf1dbb56822e674825a6659da89d02d80891c867568: Status 404 returned error can't find the container with id bd6263cae2d2dd9b85c5edf1dbb56822e674825a6659da89d02d80891c867568 Feb 14 15:37:43 crc kubenswrapper[4750]: I0214 15:37:43.918152 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerDied","Data":"5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895"} Feb 14 15:37:43 crc kubenswrapper[4750]: I0214 15:37:43.918097 4750 generic.go:334] "Generic (PLEG): container finished" podID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerID="5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895" exitCode=0 Feb 14 15:37:43 crc kubenswrapper[4750]: I0214 15:37:43.918444 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerStarted","Data":"bd6263cae2d2dd9b85c5edf1dbb56822e674825a6659da89d02d80891c867568"} Feb 14 15:37:43 crc kubenswrapper[4750]: I0214 15:37:43.923089 4750 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 15:37:45 crc kubenswrapper[4750]: I0214 15:37:45.945598 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerStarted","Data":"11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc"} Feb 14 15:37:47 crc kubenswrapper[4750]: I0214 15:37:47.967577 4750 generic.go:334] "Generic (PLEG): container finished" podID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerID="11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc" exitCode=0 Feb 14 15:37:47 crc kubenswrapper[4750]: I0214 15:37:47.967650 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerDied","Data":"11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc"} Feb 14 15:37:48 crc kubenswrapper[4750]: I0214 15:37:48.981920 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerStarted","Data":"642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5"} Feb 14 15:37:49 crc kubenswrapper[4750]: I0214 15:37:49.029089 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mcnd6" podStartSLOduration=3.2812552249999998 podStartE2EDuration="8.028637036s" podCreationTimestamp="2026-02-14 15:37:41 +0000 UTC" firstStartedPulling="2026-02-14 15:37:43.919814326 +0000 UTC m=+6335.945803807" lastFinishedPulling="2026-02-14 15:37:48.667196147 +0000 UTC m=+6340.693185618" observedRunningTime="2026-02-14 15:37:49.0157752 +0000 UTC m=+6341.041764691" watchObservedRunningTime="2026-02-14 15:37:49.028637036 +0000 UTC m=+6341.054626517" Feb 14 15:37:52 crc kubenswrapper[4750]: I0214 15:37:52.441997 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:52 crc kubenswrapper[4750]: I0214 15:37:52.442904 4750 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:52 crc kubenswrapper[4750]: I0214 15:37:52.514620 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:53 crc kubenswrapper[4750]: I0214 15:37:53.096805 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:53 crc kubenswrapper[4750]: I0214 15:37:53.234827 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcnd6"] Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.048194 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mcnd6" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="registry-server" containerID="cri-o://642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5" gracePeriod=2 Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.372332 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5v79s/must-gather-8q79t"] Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.374759 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.391851 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5v79s"/"openshift-service-ca.crt" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.391848 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5v79s"/"kube-root-ca.crt" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.399466 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55c5z\" (UniqueName: \"kubernetes.io/projected/a06d5eb3-be79-41e9-9b6f-ad18672dca78-kube-api-access-55c5z\") pod \"must-gather-8q79t\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") " pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.399691 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a06d5eb3-be79-41e9-9b6f-ad18672dca78-must-gather-output\") pod \"must-gather-8q79t\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") " pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.417319 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5v79s/must-gather-8q79t"] Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.504375 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a06d5eb3-be79-41e9-9b6f-ad18672dca78-must-gather-output\") pod \"must-gather-8q79t\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") " pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.504557 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-55c5z\" (UniqueName: \"kubernetes.io/projected/a06d5eb3-be79-41e9-9b6f-ad18672dca78-kube-api-access-55c5z\") pod \"must-gather-8q79t\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") " pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.505672 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a06d5eb3-be79-41e9-9b6f-ad18672dca78-must-gather-output\") pod \"must-gather-8q79t\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") " pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.590057 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55c5z\" (UniqueName: \"kubernetes.io/projected/a06d5eb3-be79-41e9-9b6f-ad18672dca78-kube-api-access-55c5z\") pod \"must-gather-8q79t\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") " pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.698795 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/must-gather-8q79t" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.857577 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.921319 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-catalog-content\") pod \"a3be7a6e-a418-444d-8f6d-677860b493e0\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.921442 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-utilities\") pod \"a3be7a6e-a418-444d-8f6d-677860b493e0\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.921486 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h57gq\" (UniqueName: \"kubernetes.io/projected/a3be7a6e-a418-444d-8f6d-677860b493e0-kube-api-access-h57gq\") pod \"a3be7a6e-a418-444d-8f6d-677860b493e0\" (UID: \"a3be7a6e-a418-444d-8f6d-677860b493e0\") " Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.922265 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-utilities" (OuterVolumeSpecName: "utilities") pod "a3be7a6e-a418-444d-8f6d-677860b493e0" (UID: "a3be7a6e-a418-444d-8f6d-677860b493e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.929405 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3be7a6e-a418-444d-8f6d-677860b493e0-kube-api-access-h57gq" (OuterVolumeSpecName: "kube-api-access-h57gq") pod "a3be7a6e-a418-444d-8f6d-677860b493e0" (UID: "a3be7a6e-a418-444d-8f6d-677860b493e0"). InnerVolumeSpecName "kube-api-access-h57gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:37:55 crc kubenswrapper[4750]: I0214 15:37:55.949690 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3be7a6e-a418-444d-8f6d-677860b493e0" (UID: "a3be7a6e-a418-444d-8f6d-677860b493e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.024720 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.025002 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3be7a6e-a418-444d-8f6d-677860b493e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.025015 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h57gq\" (UniqueName: \"kubernetes.io/projected/a3be7a6e-a418-444d-8f6d-677860b493e0-kube-api-access-h57gq\") on node \"crc\" DevicePath \"\"" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.061507 4750 generic.go:334] "Generic (PLEG): container finished" podID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerID="642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5" exitCode=0 Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.061551 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerDied","Data":"642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5"} Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.061583 4750 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-mcnd6" event={"ID":"a3be7a6e-a418-444d-8f6d-677860b493e0","Type":"ContainerDied","Data":"bd6263cae2d2dd9b85c5edf1dbb56822e674825a6659da89d02d80891c867568"} Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.061599 4750 scope.go:117] "RemoveContainer" containerID="642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.061600 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcnd6" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.095722 4750 scope.go:117] "RemoveContainer" containerID="11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.103276 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcnd6"] Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.112795 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcnd6"] Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.125820 4750 scope.go:117] "RemoveContainer" containerID="5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.152879 4750 scope.go:117] "RemoveContainer" containerID="642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5" Feb 14 15:37:56 crc kubenswrapper[4750]: E0214 15:37:56.153930 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5\": container with ID starting with 642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5 not found: ID does not exist" containerID="642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.153972 4750 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5"} err="failed to get container status \"642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5\": rpc error: code = NotFound desc = could not find container \"642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5\": container with ID starting with 642134554e12765b4137917262cebaf3bbdfbce4d21d188ed0cd776012401dd5 not found: ID does not exist" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.154000 4750 scope.go:117] "RemoveContainer" containerID="11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc" Feb 14 15:37:56 crc kubenswrapper[4750]: E0214 15:37:56.154516 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc\": container with ID starting with 11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc not found: ID does not exist" containerID="11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.154536 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc"} err="failed to get container status \"11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc\": rpc error: code = NotFound desc = could not find container \"11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc\": container with ID starting with 11d694cc7c2542be13e2612b2298f5c27e3e981fe6dfb61ecac87087194dc9dc not found: ID does not exist" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.154549 4750 scope.go:117] "RemoveContainer" containerID="5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895" Feb 14 15:37:56 crc kubenswrapper[4750]: E0214 
15:37:56.155023 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895\": container with ID starting with 5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895 not found: ID does not exist" containerID="5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.155040 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895"} err="failed to get container status \"5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895\": rpc error: code = NotFound desc = could not find container \"5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895\": container with ID starting with 5b83f575901b4b6f18164501f4638a1499ffb9bd440e697e01de752d3ce65895 not found: ID does not exist" Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.339066 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5v79s/must-gather-8q79t"] Feb 14 15:37:56 crc kubenswrapper[4750]: I0214 15:37:56.754544 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" path="/var/lib/kubelet/pods/a3be7a6e-a418-444d-8f6d-677860b493e0/volumes" Feb 14 15:37:57 crc kubenswrapper[4750]: I0214 15:37:57.076253 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/must-gather-8q79t" event={"ID":"a06d5eb3-be79-41e9-9b6f-ad18672dca78","Type":"ContainerStarted","Data":"c8056789c1428e1b3f33e96d9ccc97c53695b041b28be5a05b9119c74a4a2dae"} Feb 14 15:37:57 crc kubenswrapper[4750]: I0214 15:37:57.076292 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/must-gather-8q79t" 
event={"ID":"a06d5eb3-be79-41e9-9b6f-ad18672dca78","Type":"ContainerStarted","Data":"cc957ad0192b50c6473ff4b3a5b7b20f65711e1db0517c4e7c8c60db0a3b992f"} Feb 14 15:37:57 crc kubenswrapper[4750]: I0214 15:37:57.076306 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/must-gather-8q79t" event={"ID":"a06d5eb3-be79-41e9-9b6f-ad18672dca78","Type":"ContainerStarted","Data":"fe21560d7c72cb02e34394e69239195626ba1745b5bc1ad173c77a348b0a4815"} Feb 14 15:37:57 crc kubenswrapper[4750]: I0214 15:37:57.108700 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5v79s/must-gather-8q79t" podStartSLOduration=2.108671634 podStartE2EDuration="2.108671634s" podCreationTimestamp="2026-02-14 15:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 15:37:57.095800008 +0000 UTC m=+6349.121789509" watchObservedRunningTime="2026-02-14 15:37:57.108671634 +0000 UTC m=+6349.134661115" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.139825 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hg7cs"] Feb 14 15:37:59 crc kubenswrapper[4750]: E0214 15:37:59.140541 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="registry-server" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.140557 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="registry-server" Feb 14 15:37:59 crc kubenswrapper[4750]: E0214 15:37:59.140580 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="extract-content" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.140589 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" 
containerName="extract-content" Feb 14 15:37:59 crc kubenswrapper[4750]: E0214 15:37:59.140612 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="extract-utilities" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.140621 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="extract-utilities" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.140955 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3be7a6e-a418-444d-8f6d-677860b493e0" containerName="registry-server" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.146194 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.175684 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hg7cs"] Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.208104 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-catalog-content\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.208523 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-utilities\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.208956 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gs8jn\" (UniqueName: \"kubernetes.io/projected/05a63eca-2cc5-438a-9bb2-b86dbba7f805-kube-api-access-gs8jn\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.314712 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-catalog-content\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.314782 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-utilities\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.314894 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs8jn\" (UniqueName: \"kubernetes.io/projected/05a63eca-2cc5-438a-9bb2-b86dbba7f805-kube-api-access-gs8jn\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.315765 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-catalog-content\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.315818 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-utilities\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.335617 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs8jn\" (UniqueName: \"kubernetes.io/projected/05a63eca-2cc5-438a-9bb2-b86dbba7f805-kube-api-access-gs8jn\") pod \"certified-operators-hg7cs\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:37:59 crc kubenswrapper[4750]: I0214 15:37:59.477176 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.041812 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hg7cs"] Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.115905 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerStarted","Data":"6d8efc8e3c071c17ab7b063a1cda2b0b25b6177e450520263c7429ca1a768921"} Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.131477 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.131529 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.800684 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5v79s/crc-debug-t5zjj"] Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.802845 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.805966 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5v79s"/"default-dockercfg-6g74z" Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.958926 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwkt\" (UniqueName: \"kubernetes.io/projected/06e1be5d-9e2d-4d66-a05c-b901a22a6853-kube-api-access-6jwkt\") pod \"crc-debug-t5zjj\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:00 crc kubenswrapper[4750]: I0214 15:38:00.959454 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e1be5d-9e2d-4d66-a05c-b901a22a6853-host\") pod \"crc-debug-t5zjj\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.061772 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwkt\" (UniqueName: \"kubernetes.io/projected/06e1be5d-9e2d-4d66-a05c-b901a22a6853-kube-api-access-6jwkt\") pod \"crc-debug-t5zjj\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.062141 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/06e1be5d-9e2d-4d66-a05c-b901a22a6853-host\") pod \"crc-debug-t5zjj\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.063429 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e1be5d-9e2d-4d66-a05c-b901a22a6853-host\") pod \"crc-debug-t5zjj\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.084299 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwkt\" (UniqueName: \"kubernetes.io/projected/06e1be5d-9e2d-4d66-a05c-b901a22a6853-kube-api-access-6jwkt\") pod \"crc-debug-t5zjj\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.124681 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.146816 4750 generic.go:334] "Generic (PLEG): container finished" podID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerID="2d3fa2eda790109d02bd1bcc93c0d7b7e33be148e7dc722067240d58cb276527" exitCode=0 Feb 14 15:38:01 crc kubenswrapper[4750]: I0214 15:38:01.147196 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerDied","Data":"2d3fa2eda790109d02bd1bcc93c0d7b7e33be148e7dc722067240d58cb276527"} Feb 14 15:38:01 crc kubenswrapper[4750]: W0214 15:38:01.196737 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e1be5d_9e2d_4d66_a05c_b901a22a6853.slice/crio-71b1df35e436b013febf9e9e2cdde36dfab06802c83b76b625dd64e17008781a WatchSource:0}: Error finding container 71b1df35e436b013febf9e9e2cdde36dfab06802c83b76b625dd64e17008781a: Status 404 returned error can't find the container with id 71b1df35e436b013febf9e9e2cdde36dfab06802c83b76b625dd64e17008781a Feb 14 15:38:02 crc kubenswrapper[4750]: I0214 15:38:02.160766 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" event={"ID":"06e1be5d-9e2d-4d66-a05c-b901a22a6853","Type":"ContainerStarted","Data":"3d1a9d4bd58677dc41b615f4036e811b45ee44b8b0bcf0d67665683afcba17bb"} Feb 14 15:38:02 crc kubenswrapper[4750]: I0214 15:38:02.162372 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" event={"ID":"06e1be5d-9e2d-4d66-a05c-b901a22a6853","Type":"ContainerStarted","Data":"71b1df35e436b013febf9e9e2cdde36dfab06802c83b76b625dd64e17008781a"} Feb 14 15:38:02 crc kubenswrapper[4750]: I0214 15:38:02.167456 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerStarted","Data":"a9769d6b82ca0dd2b672bd18a3c37845b9a7aa415c1dd3a4b2e1d1d680c0dfdb"} Feb 14 15:38:02 crc kubenswrapper[4750]: I0214 15:38:02.198286 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" podStartSLOduration=2.19826067 podStartE2EDuration="2.19826067s" podCreationTimestamp="2026-02-14 15:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 15:38:02.180892046 +0000 UTC m=+6354.206881527" watchObservedRunningTime="2026-02-14 15:38:02.19826067 +0000 UTC m=+6354.224250151" Feb 14 15:38:04 crc kubenswrapper[4750]: I0214 15:38:04.195042 4750 generic.go:334] "Generic (PLEG): container finished" podID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerID="a9769d6b82ca0dd2b672bd18a3c37845b9a7aa415c1dd3a4b2e1d1d680c0dfdb" exitCode=0 Feb 14 15:38:04 crc kubenswrapper[4750]: I0214 15:38:04.195153 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerDied","Data":"a9769d6b82ca0dd2b672bd18a3c37845b9a7aa415c1dd3a4b2e1d1d680c0dfdb"} Feb 14 15:38:05 crc kubenswrapper[4750]: I0214 15:38:05.215599 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerStarted","Data":"ec064589a945dbe9f167917179785c6aec8a86921cc1e719e9d4536f4f2196d7"} Feb 14 15:38:05 crc kubenswrapper[4750]: I0214 15:38:05.240727 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hg7cs" podStartSLOduration=2.81059277 podStartE2EDuration="6.24070699s" podCreationTimestamp="2026-02-14 15:37:59 +0000 UTC" 
firstStartedPulling="2026-02-14 15:38:01.162820874 +0000 UTC m=+6353.188810355" lastFinishedPulling="2026-02-14 15:38:04.592935094 +0000 UTC m=+6356.618924575" observedRunningTime="2026-02-14 15:38:05.235651787 +0000 UTC m=+6357.261641268" watchObservedRunningTime="2026-02-14 15:38:05.24070699 +0000 UTC m=+6357.266696471" Feb 14 15:38:09 crc kubenswrapper[4750]: I0214 15:38:09.477544 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:09 crc kubenswrapper[4750]: I0214 15:38:09.478175 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:10 crc kubenswrapper[4750]: I0214 15:38:10.527213 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hg7cs" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="registry-server" probeResult="failure" output=< Feb 14 15:38:10 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:38:10 crc kubenswrapper[4750]: > Feb 14 15:38:19 crc kubenswrapper[4750]: I0214 15:38:19.536990 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:19 crc kubenswrapper[4750]: I0214 15:38:19.586738 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:19 crc kubenswrapper[4750]: I0214 15:38:19.777989 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hg7cs"] Feb 14 15:38:21 crc kubenswrapper[4750]: I0214 15:38:21.393803 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hg7cs" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="registry-server" 
containerID="cri-o://ec064589a945dbe9f167917179785c6aec8a86921cc1e719e9d4536f4f2196d7" gracePeriod=2 Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.406163 4750 generic.go:334] "Generic (PLEG): container finished" podID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerID="ec064589a945dbe9f167917179785c6aec8a86921cc1e719e9d4536f4f2196d7" exitCode=0 Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.406253 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerDied","Data":"ec064589a945dbe9f167917179785c6aec8a86921cc1e719e9d4536f4f2196d7"} Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.406433 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hg7cs" event={"ID":"05a63eca-2cc5-438a-9bb2-b86dbba7f805","Type":"ContainerDied","Data":"6d8efc8e3c071c17ab7b063a1cda2b0b25b6177e450520263c7429ca1a768921"} Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.406452 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d8efc8e3c071c17ab7b063a1cda2b0b25b6177e450520263c7429ca1a768921" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.455464 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.530878 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-catalog-content\") pod \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.531168 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-utilities\") pod \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.531281 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs8jn\" (UniqueName: \"kubernetes.io/projected/05a63eca-2cc5-438a-9bb2-b86dbba7f805-kube-api-access-gs8jn\") pod \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\" (UID: \"05a63eca-2cc5-438a-9bb2-b86dbba7f805\") " Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.531652 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-utilities" (OuterVolumeSpecName: "utilities") pod "05a63eca-2cc5-438a-9bb2-b86dbba7f805" (UID: "05a63eca-2cc5-438a-9bb2-b86dbba7f805"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.532319 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.544277 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a63eca-2cc5-438a-9bb2-b86dbba7f805-kube-api-access-gs8jn" (OuterVolumeSpecName: "kube-api-access-gs8jn") pod "05a63eca-2cc5-438a-9bb2-b86dbba7f805" (UID: "05a63eca-2cc5-438a-9bb2-b86dbba7f805"). InnerVolumeSpecName "kube-api-access-gs8jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.598198 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05a63eca-2cc5-438a-9bb2-b86dbba7f805" (UID: "05a63eca-2cc5-438a-9bb2-b86dbba7f805"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.635905 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs8jn\" (UniqueName: \"kubernetes.io/projected/05a63eca-2cc5-438a-9bb2-b86dbba7f805-kube-api-access-gs8jn\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:22 crc kubenswrapper[4750]: I0214 15:38:22.635943 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a63eca-2cc5-438a-9bb2-b86dbba7f805-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:23 crc kubenswrapper[4750]: I0214 15:38:23.414458 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hg7cs" Feb 14 15:38:23 crc kubenswrapper[4750]: I0214 15:38:23.438447 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hg7cs"] Feb 14 15:38:23 crc kubenswrapper[4750]: I0214 15:38:23.451159 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hg7cs"] Feb 14 15:38:24 crc kubenswrapper[4750]: I0214 15:38:24.776020 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" path="/var/lib/kubelet/pods/05a63eca-2cc5-438a-9bb2-b86dbba7f805/volumes" Feb 14 15:38:30 crc kubenswrapper[4750]: I0214 15:38:30.128821 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:38:30 crc kubenswrapper[4750]: I0214 15:38:30.129436 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:38:46 crc kubenswrapper[4750]: I0214 15:38:46.670955 4750 generic.go:334] "Generic (PLEG): container finished" podID="06e1be5d-9e2d-4d66-a05c-b901a22a6853" containerID="3d1a9d4bd58677dc41b615f4036e811b45ee44b8b0bcf0d67665683afcba17bb" exitCode=0 Feb 14 15:38:46 crc kubenswrapper[4750]: I0214 15:38:46.671051 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" event={"ID":"06e1be5d-9e2d-4d66-a05c-b901a22a6853","Type":"ContainerDied","Data":"3d1a9d4bd58677dc41b615f4036e811b45ee44b8b0bcf0d67665683afcba17bb"} Feb 
14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.834062 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.874802 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5v79s/crc-debug-t5zjj"] Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.885596 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5v79s/crc-debug-t5zjj"] Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.951433 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jwkt\" (UniqueName: \"kubernetes.io/projected/06e1be5d-9e2d-4d66-a05c-b901a22a6853-kube-api-access-6jwkt\") pod \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.951573 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e1be5d-9e2d-4d66-a05c-b901a22a6853-host\") pod \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\" (UID: \"06e1be5d-9e2d-4d66-a05c-b901a22a6853\") " Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.951739 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06e1be5d-9e2d-4d66-a05c-b901a22a6853-host" (OuterVolumeSpecName: "host") pod "06e1be5d-9e2d-4d66-a05c-b901a22a6853" (UID: "06e1be5d-9e2d-4d66-a05c-b901a22a6853"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.952493 4750 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e1be5d-9e2d-4d66-a05c-b901a22a6853-host\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:47 crc kubenswrapper[4750]: I0214 15:38:47.959478 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e1be5d-9e2d-4d66-a05c-b901a22a6853-kube-api-access-6jwkt" (OuterVolumeSpecName: "kube-api-access-6jwkt") pod "06e1be5d-9e2d-4d66-a05c-b901a22a6853" (UID: "06e1be5d-9e2d-4d66-a05c-b901a22a6853"). InnerVolumeSpecName "kube-api-access-6jwkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:38:48 crc kubenswrapper[4750]: I0214 15:38:48.055071 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jwkt\" (UniqueName: \"kubernetes.io/projected/06e1be5d-9e2d-4d66-a05c-b901a22a6853-kube-api-access-6jwkt\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:48 crc kubenswrapper[4750]: I0214 15:38:48.691682 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b1df35e436b013febf9e9e2cdde36dfab06802c83b76b625dd64e17008781a" Feb 14 15:38:48 crc kubenswrapper[4750]: I0214 15:38:48.692010 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-t5zjj" Feb 14 15:38:48 crc kubenswrapper[4750]: I0214 15:38:48.767974 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e1be5d-9e2d-4d66-a05c-b901a22a6853" path="/var/lib/kubelet/pods/06e1be5d-9e2d-4d66-a05c-b901a22a6853/volumes" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.140577 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5v79s/crc-debug-z7wxp"] Feb 14 15:38:49 crc kubenswrapper[4750]: E0214 15:38:49.141396 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="extract-content" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.141418 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="extract-content" Feb 14 15:38:49 crc kubenswrapper[4750]: E0214 15:38:49.141445 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="extract-utilities" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.141454 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="extract-utilities" Feb 14 15:38:49 crc kubenswrapper[4750]: E0214 15:38:49.141492 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e1be5d-9e2d-4d66-a05c-b901a22a6853" containerName="container-00" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.141500 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e1be5d-9e2d-4d66-a05c-b901a22a6853" containerName="container-00" Feb 14 15:38:49 crc kubenswrapper[4750]: E0214 15:38:49.141512 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="registry-server" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.141519 4750 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="registry-server" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.141890 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e1be5d-9e2d-4d66-a05c-b901a22a6853" containerName="container-00" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.141916 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a63eca-2cc5-438a-9bb2-b86dbba7f805" containerName="registry-server" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.142947 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.145366 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5v79s"/"default-dockercfg-6g74z" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.301274 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1efb657-6fcb-48fe-9062-3b7a71f383af-host\") pod \"crc-debug-z7wxp\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.301565 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7689h\" (UniqueName: \"kubernetes.io/projected/e1efb657-6fcb-48fe-9062-3b7a71f383af-kube-api-access-7689h\") pod \"crc-debug-z7wxp\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.403507 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7689h\" (UniqueName: \"kubernetes.io/projected/e1efb657-6fcb-48fe-9062-3b7a71f383af-kube-api-access-7689h\") pod \"crc-debug-z7wxp\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " 
pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.403619 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1efb657-6fcb-48fe-9062-3b7a71f383af-host\") pod \"crc-debug-z7wxp\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.403741 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1efb657-6fcb-48fe-9062-3b7a71f383af-host\") pod \"crc-debug-z7wxp\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.424257 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7689h\" (UniqueName: \"kubernetes.io/projected/e1efb657-6fcb-48fe-9062-3b7a71f383af-kube-api-access-7689h\") pod \"crc-debug-z7wxp\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.462551 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:49 crc kubenswrapper[4750]: I0214 15:38:49.704155 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" event={"ID":"e1efb657-6fcb-48fe-9062-3b7a71f383af","Type":"ContainerStarted","Data":"87d03f7970314a8ea4afc4e97e38749b37316780ba76fee0241f57e163acca91"} Feb 14 15:38:50 crc kubenswrapper[4750]: I0214 15:38:50.717184 4750 generic.go:334] "Generic (PLEG): container finished" podID="e1efb657-6fcb-48fe-9062-3b7a71f383af" containerID="14945fb802b3d19e90d68ef6c64a7998bab5e354d98731335368f7614b91d0b7" exitCode=0 Feb 14 15:38:50 crc kubenswrapper[4750]: I0214 15:38:50.717291 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" event={"ID":"e1efb657-6fcb-48fe-9062-3b7a71f383af","Type":"ContainerDied","Data":"14945fb802b3d19e90d68ef6c64a7998bab5e354d98731335368f7614b91d0b7"} Feb 14 15:38:51 crc kubenswrapper[4750]: I0214 15:38:51.875871 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:51 crc kubenswrapper[4750]: I0214 15:38:51.959550 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1efb657-6fcb-48fe-9062-3b7a71f383af-host\") pod \"e1efb657-6fcb-48fe-9062-3b7a71f383af\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " Feb 14 15:38:51 crc kubenswrapper[4750]: I0214 15:38:51.959953 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7689h\" (UniqueName: \"kubernetes.io/projected/e1efb657-6fcb-48fe-9062-3b7a71f383af-kube-api-access-7689h\") pod \"e1efb657-6fcb-48fe-9062-3b7a71f383af\" (UID: \"e1efb657-6fcb-48fe-9062-3b7a71f383af\") " Feb 14 15:38:51 crc kubenswrapper[4750]: I0214 15:38:51.959625 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1efb657-6fcb-48fe-9062-3b7a71f383af-host" (OuterVolumeSpecName: "host") pod "e1efb657-6fcb-48fe-9062-3b7a71f383af" (UID: "e1efb657-6fcb-48fe-9062-3b7a71f383af"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 15:38:51 crc kubenswrapper[4750]: I0214 15:38:51.960878 4750 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1efb657-6fcb-48fe-9062-3b7a71f383af-host\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:51 crc kubenswrapper[4750]: I0214 15:38:51.965973 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1efb657-6fcb-48fe-9062-3b7a71f383af-kube-api-access-7689h" (OuterVolumeSpecName: "kube-api-access-7689h") pod "e1efb657-6fcb-48fe-9062-3b7a71f383af" (UID: "e1efb657-6fcb-48fe-9062-3b7a71f383af"). InnerVolumeSpecName "kube-api-access-7689h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:38:52 crc kubenswrapper[4750]: I0214 15:38:52.062556 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7689h\" (UniqueName: \"kubernetes.io/projected/e1efb657-6fcb-48fe-9062-3b7a71f383af-kube-api-access-7689h\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:52 crc kubenswrapper[4750]: I0214 15:38:52.763172 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" event={"ID":"e1efb657-6fcb-48fe-9062-3b7a71f383af","Type":"ContainerDied","Data":"87d03f7970314a8ea4afc4e97e38749b37316780ba76fee0241f57e163acca91"} Feb 14 15:38:52 crc kubenswrapper[4750]: I0214 15:38:52.763417 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d03f7970314a8ea4afc4e97e38749b37316780ba76fee0241f57e163acca91" Feb 14 15:38:52 crc kubenswrapper[4750]: I0214 15:38:52.763469 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-z7wxp" Feb 14 15:38:53 crc kubenswrapper[4750]: I0214 15:38:53.062827 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5v79s/crc-debug-z7wxp"] Feb 14 15:38:53 crc kubenswrapper[4750]: I0214 15:38:53.071987 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5v79s/crc-debug-z7wxp"] Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.306893 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5v79s/crc-debug-qkklf"] Feb 14 15:38:54 crc kubenswrapper[4750]: E0214 15:38:54.307909 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1efb657-6fcb-48fe-9062-3b7a71f383af" containerName="container-00" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.307928 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1efb657-6fcb-48fe-9062-3b7a71f383af" containerName="container-00" Feb 14 15:38:54 crc 
kubenswrapper[4750]: I0214 15:38:54.308366 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1efb657-6fcb-48fe-9062-3b7a71f383af" containerName="container-00" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.309464 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.311357 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5v79s"/"default-dockercfg-6g74z" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.413978 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85ff298d-9368-43dd-986c-3a2d91ed781d-host\") pod \"crc-debug-qkklf\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.414263 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt46z\" (UniqueName: \"kubernetes.io/projected/85ff298d-9368-43dd-986c-3a2d91ed781d-kube-api-access-jt46z\") pod \"crc-debug-qkklf\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.516781 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt46z\" (UniqueName: \"kubernetes.io/projected/85ff298d-9368-43dd-986c-3a2d91ed781d-kube-api-access-jt46z\") pod \"crc-debug-qkklf\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.516887 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85ff298d-9368-43dd-986c-3a2d91ed781d-host\") pod 
\"crc-debug-qkklf\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.517160 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85ff298d-9368-43dd-986c-3a2d91ed781d-host\") pod \"crc-debug-qkklf\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.537941 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt46z\" (UniqueName: \"kubernetes.io/projected/85ff298d-9368-43dd-986c-3a2d91ed781d-kube-api-access-jt46z\") pod \"crc-debug-qkklf\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.629448 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:54 crc kubenswrapper[4750]: W0214 15:38:54.668760 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85ff298d_9368_43dd_986c_3a2d91ed781d.slice/crio-ce600ed08003801aebdcc73a1280b4dea38b2cccd48a3b1eb97c3613bcd4f97d WatchSource:0}: Error finding container ce600ed08003801aebdcc73a1280b4dea38b2cccd48a3b1eb97c3613bcd4f97d: Status 404 returned error can't find the container with id ce600ed08003801aebdcc73a1280b4dea38b2cccd48a3b1eb97c3613bcd4f97d Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.759066 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1efb657-6fcb-48fe-9062-3b7a71f383af" path="/var/lib/kubelet/pods/e1efb657-6fcb-48fe-9062-3b7a71f383af/volumes" Feb 14 15:38:54 crc kubenswrapper[4750]: I0214 15:38:54.791824 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-5v79s/crc-debug-qkklf" event={"ID":"85ff298d-9368-43dd-986c-3a2d91ed781d","Type":"ContainerStarted","Data":"ce600ed08003801aebdcc73a1280b4dea38b2cccd48a3b1eb97c3613bcd4f97d"} Feb 14 15:38:55 crc kubenswrapper[4750]: I0214 15:38:55.802401 4750 generic.go:334] "Generic (PLEG): container finished" podID="85ff298d-9368-43dd-986c-3a2d91ed781d" containerID="d59bbd177ffc348d98eb480a38de02df3a2e69ff3680ed8186746c8f5b067e36" exitCode=0 Feb 14 15:38:55 crc kubenswrapper[4750]: I0214 15:38:55.802457 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/crc-debug-qkklf" event={"ID":"85ff298d-9368-43dd-986c-3a2d91ed781d","Type":"ContainerDied","Data":"d59bbd177ffc348d98eb480a38de02df3a2e69ff3680ed8186746c8f5b067e36"} Feb 14 15:38:55 crc kubenswrapper[4750]: I0214 15:38:55.855195 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5v79s/crc-debug-qkklf"] Feb 14 15:38:55 crc kubenswrapper[4750]: I0214 15:38:55.867884 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5v79s/crc-debug-qkklf"] Feb 14 15:38:56 crc kubenswrapper[4750]: I0214 15:38:56.925671 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:56 crc kubenswrapper[4750]: I0214 15:38:56.969397 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85ff298d-9368-43dd-986c-3a2d91ed781d-host\") pod \"85ff298d-9368-43dd-986c-3a2d91ed781d\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " Feb 14 15:38:56 crc kubenswrapper[4750]: I0214 15:38:56.969622 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt46z\" (UniqueName: \"kubernetes.io/projected/85ff298d-9368-43dd-986c-3a2d91ed781d-kube-api-access-jt46z\") pod \"85ff298d-9368-43dd-986c-3a2d91ed781d\" (UID: \"85ff298d-9368-43dd-986c-3a2d91ed781d\") " Feb 14 15:38:56 crc kubenswrapper[4750]: I0214 15:38:56.970050 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85ff298d-9368-43dd-986c-3a2d91ed781d-host" (OuterVolumeSpecName: "host") pod "85ff298d-9368-43dd-986c-3a2d91ed781d" (UID: "85ff298d-9368-43dd-986c-3a2d91ed781d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 15:38:56 crc kubenswrapper[4750]: I0214 15:38:56.970431 4750 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85ff298d-9368-43dd-986c-3a2d91ed781d-host\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:56 crc kubenswrapper[4750]: I0214 15:38:56.975629 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ff298d-9368-43dd-986c-3a2d91ed781d-kube-api-access-jt46z" (OuterVolumeSpecName: "kube-api-access-jt46z") pod "85ff298d-9368-43dd-986c-3a2d91ed781d" (UID: "85ff298d-9368-43dd-986c-3a2d91ed781d"). InnerVolumeSpecName "kube-api-access-jt46z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:38:57 crc kubenswrapper[4750]: I0214 15:38:57.073397 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt46z\" (UniqueName: \"kubernetes.io/projected/85ff298d-9368-43dd-986c-3a2d91ed781d-kube-api-access-jt46z\") on node \"crc\" DevicePath \"\"" Feb 14 15:38:57 crc kubenswrapper[4750]: I0214 15:38:57.851405 4750 scope.go:117] "RemoveContainer" containerID="d59bbd177ffc348d98eb480a38de02df3a2e69ff3680ed8186746c8f5b067e36" Feb 14 15:38:57 crc kubenswrapper[4750]: I0214 15:38:57.851577 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/crc-debug-qkklf" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.757736 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ff298d-9368-43dd-986c-3a2d91ed781d" path="/var/lib/kubelet/pods/85ff298d-9368-43dd-986c-3a2d91ed781d/volumes" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.860310 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nh8zt"] Feb 14 15:38:58 crc kubenswrapper[4750]: E0214 15:38:58.861070 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ff298d-9368-43dd-986c-3a2d91ed781d" containerName="container-00" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.861097 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ff298d-9368-43dd-986c-3a2d91ed781d" containerName="container-00" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.861478 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ff298d-9368-43dd-986c-3a2d91ed781d" containerName="container-00" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.863375 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.885183 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh8zt"] Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.930680 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-catalog-content\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.930975 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-utilities\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:58 crc kubenswrapper[4750]: I0214 15:38:58.931028 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmgs\" (UniqueName: \"kubernetes.io/projected/6381bdfb-fa44-48b1-85db-08101faee67d-kube-api-access-8hmgs\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.033470 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-catalog-content\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.033544 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-utilities\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.033600 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmgs\" (UniqueName: \"kubernetes.io/projected/6381bdfb-fa44-48b1-85db-08101faee67d-kube-api-access-8hmgs\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.034103 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-catalog-content\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.034137 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-utilities\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.061402 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmgs\" (UniqueName: \"kubernetes.io/projected/6381bdfb-fa44-48b1-85db-08101faee67d-kube-api-access-8hmgs\") pod \"community-operators-nh8zt\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.189419 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.705980 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh8zt"] Feb 14 15:38:59 crc kubenswrapper[4750]: W0214 15:38:59.709732 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6381bdfb_fa44_48b1_85db_08101faee67d.slice/crio-dc6313fa2b36120458e99760f7e8a3773d6dfd801bfe71d1bc021af5d00725f4 WatchSource:0}: Error finding container dc6313fa2b36120458e99760f7e8a3773d6dfd801bfe71d1bc021af5d00725f4: Status 404 returned error can't find the container with id dc6313fa2b36120458e99760f7e8a3773d6dfd801bfe71d1bc021af5d00725f4 Feb 14 15:38:59 crc kubenswrapper[4750]: I0214 15:38:59.889716 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerStarted","Data":"dc6313fa2b36120458e99760f7e8a3773d6dfd801bfe71d1bc021af5d00725f4"} Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.128635 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.129005 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.129059 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.130000 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.130059 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" gracePeriod=600 Feb 14 15:39:00 crc kubenswrapper[4750]: E0214 15:39:00.220879 4750 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6381bdfb_fa44_48b1_85db_08101faee67d.slice/crio-d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333.scope\": RecentStats: unable to find data in memory cache]" Feb 14 15:39:00 crc kubenswrapper[4750]: E0214 15:39:00.278474 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.902343 4750 generic.go:334] "Generic (PLEG): container finished" podID="6381bdfb-fa44-48b1-85db-08101faee67d" 
containerID="d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333" exitCode=0 Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.902414 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerDied","Data":"d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333"} Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.907334 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" exitCode=0 Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.907387 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"} Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.907418 4750 scope.go:117] "RemoveContainer" containerID="4f4bdec9fd2c22d705b7ae9c5d115cf1eb8ca3b9e424085f60383ba37f38e84a" Feb 14 15:39:00 crc kubenswrapper[4750]: I0214 15:39:00.908226 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:39:00 crc kubenswrapper[4750]: E0214 15:39:00.908564 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:39:01 crc kubenswrapper[4750]: I0214 15:39:01.927582 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerStarted","Data":"0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b"} Feb 14 15:39:03 crc kubenswrapper[4750]: I0214 15:39:03.961434 4750 generic.go:334] "Generic (PLEG): container finished" podID="6381bdfb-fa44-48b1-85db-08101faee67d" containerID="0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b" exitCode=0 Feb 14 15:39:03 crc kubenswrapper[4750]: I0214 15:39:03.961765 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerDied","Data":"0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b"} Feb 14 15:39:04 crc kubenswrapper[4750]: I0214 15:39:04.973924 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerStarted","Data":"03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730"} Feb 14 15:39:05 crc kubenswrapper[4750]: I0214 15:39:05.000578 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nh8zt" podStartSLOduration=3.553863839 podStartE2EDuration="7.00055958s" podCreationTimestamp="2026-02-14 15:38:58 +0000 UTC" firstStartedPulling="2026-02-14 15:39:00.905665661 +0000 UTC m=+6412.931655142" lastFinishedPulling="2026-02-14 15:39:04.352361402 +0000 UTC m=+6416.378350883" observedRunningTime="2026-02-14 15:39:04.99315886 +0000 UTC m=+6417.019148341" watchObservedRunningTime="2026-02-14 15:39:05.00055958 +0000 UTC m=+6417.026549051" Feb 14 15:39:09 crc kubenswrapper[4750]: I0214 15:39:09.190281 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:39:09 crc kubenswrapper[4750]: I0214 15:39:09.192085 4750 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:39:09 crc kubenswrapper[4750]: I0214 15:39:09.244827 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:39:10 crc kubenswrapper[4750]: I0214 15:39:10.105953 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:39:10 crc kubenswrapper[4750]: I0214 15:39:10.160727 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh8zt"] Feb 14 15:39:12 crc kubenswrapper[4750]: I0214 15:39:12.063529 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nh8zt" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="registry-server" containerID="cri-o://03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730" gracePeriod=2 Feb 14 15:39:12 crc kubenswrapper[4750]: I0214 15:39:12.934431 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.028812 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-utilities\") pod \"6381bdfb-fa44-48b1-85db-08101faee67d\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.028886 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-catalog-content\") pod \"6381bdfb-fa44-48b1-85db-08101faee67d\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.029399 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmgs\" (UniqueName: \"kubernetes.io/projected/6381bdfb-fa44-48b1-85db-08101faee67d-kube-api-access-8hmgs\") pod \"6381bdfb-fa44-48b1-85db-08101faee67d\" (UID: \"6381bdfb-fa44-48b1-85db-08101faee67d\") " Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.029557 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-utilities" (OuterVolumeSpecName: "utilities") pod "6381bdfb-fa44-48b1-85db-08101faee67d" (UID: "6381bdfb-fa44-48b1-85db-08101faee67d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.030329 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.036588 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6381bdfb-fa44-48b1-85db-08101faee67d-kube-api-access-8hmgs" (OuterVolumeSpecName: "kube-api-access-8hmgs") pod "6381bdfb-fa44-48b1-85db-08101faee67d" (UID: "6381bdfb-fa44-48b1-85db-08101faee67d"). InnerVolumeSpecName "kube-api-access-8hmgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.073856 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6381bdfb-fa44-48b1-85db-08101faee67d" (UID: "6381bdfb-fa44-48b1-85db-08101faee67d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.075828 4750 generic.go:334] "Generic (PLEG): container finished" podID="6381bdfb-fa44-48b1-85db-08101faee67d" containerID="03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730" exitCode=0 Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.075866 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerDied","Data":"03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730"} Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.075893 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8zt" event={"ID":"6381bdfb-fa44-48b1-85db-08101faee67d","Type":"ContainerDied","Data":"dc6313fa2b36120458e99760f7e8a3773d6dfd801bfe71d1bc021af5d00725f4"} Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.075909 4750 scope.go:117] "RemoveContainer" containerID="03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.076050 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh8zt" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.133065 4750 scope.go:117] "RemoveContainer" containerID="0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.147868 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmgs\" (UniqueName: \"kubernetes.io/projected/6381bdfb-fa44-48b1-85db-08101faee67d-kube-api-access-8hmgs\") on node \"crc\" DevicePath \"\"" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.147909 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6381bdfb-fa44-48b1-85db-08101faee67d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.169978 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh8zt"] Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.183366 4750 scope.go:117] "RemoveContainer" containerID="d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.187155 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nh8zt"] Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.250381 4750 scope.go:117] "RemoveContainer" containerID="03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730" Feb 14 15:39:13 crc kubenswrapper[4750]: E0214 15:39:13.251470 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730\": container with ID starting with 03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730 not found: ID does not exist" containerID="03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730" Feb 14 15:39:13 crc 
kubenswrapper[4750]: I0214 15:39:13.251515 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730"} err="failed to get container status \"03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730\": rpc error: code = NotFound desc = could not find container \"03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730\": container with ID starting with 03c4ee3acb92187ea99b1f7e9dabfce96193f0520f141f42285a32ce91cb0730 not found: ID does not exist" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.251542 4750 scope.go:117] "RemoveContainer" containerID="0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b" Feb 14 15:39:13 crc kubenswrapper[4750]: E0214 15:39:13.253369 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b\": container with ID starting with 0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b not found: ID does not exist" containerID="0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.253408 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b"} err="failed to get container status \"0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b\": rpc error: code = NotFound desc = could not find container \"0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b\": container with ID starting with 0cff914805fe69f7e2771a8d99a7635bd7b4626ced9458055a22a41554b87c8b not found: ID does not exist" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.253436 4750 scope.go:117] "RemoveContainer" containerID="d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333" Feb 14 
15:39:13 crc kubenswrapper[4750]: E0214 15:39:13.253904 4750 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333\": container with ID starting with d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333 not found: ID does not exist" containerID="d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333" Feb 14 15:39:13 crc kubenswrapper[4750]: I0214 15:39:13.253934 4750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333"} err="failed to get container status \"d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333\": rpc error: code = NotFound desc = could not find container \"d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333\": container with ID starting with d606b260076fe5286cdec61ee3393ab980a074abd5528758ae62022be8820333 not found: ID does not exist" Feb 14 15:39:14 crc kubenswrapper[4750]: I0214 15:39:14.769363 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" path="/var/lib/kubelet/pods/6381bdfb-fa44-48b1-85db-08101faee67d/volumes" Feb 14 15:39:15 crc kubenswrapper[4750]: I0214 15:39:15.741789 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:39:15 crc kubenswrapper[4750]: E0214 15:39:15.742083 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:39:30 crc 
kubenswrapper[4750]: I0214 15:39:30.743460 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:39:30 crc kubenswrapper[4750]: E0214 15:39:30.744441 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:39:39 crc kubenswrapper[4750]: I0214 15:39:39.981174 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-api/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.190750 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-evaluator/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.196367 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-notifier/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.213155 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4fbf10ca-3c0c-4779-b08f-212a47db3302/aodh-listener/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.532233 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66b79d5688-qxq94_ba0aa13e-484e-48c3-9326-f606f3f5d98c/barbican-api-log/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.558733 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66b79d5688-qxq94_ba0aa13e-484e-48c3-9326-f606f3f5d98c/barbican-api/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.704858 4750 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fdccf6dd6-ftmgl_cd37e6f7-6e18-4587-8237-234b4d5cf12a/barbican-keystone-listener/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.853909 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fdccf6dd6-ftmgl_cd37e6f7-6e18-4587-8237-234b4d5cf12a/barbican-keystone-listener-log/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.898665 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f9f44bff-kmkdt_35b85269-938a-4bc4-8321-f11d72214b39/barbican-worker/0.log" Feb 14 15:39:40 crc kubenswrapper[4750]: I0214 15:39:40.927029 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f9f44bff-kmkdt_35b85269-938a-4bc4-8321-f11d72214b39/barbican-worker-log/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.089686 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kgjzq_35713168-58fa-49ee-8783-2631f53b02a9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.300987 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/ceilometer-central-agent/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.379936 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/proxy-httpd/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.381389 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/ceilometer-notification-agent/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.449703 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_b67f7611-5ec3-4e68-86cf-52b26c4e3b1f/sg-core/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.594397 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ecffb890-5905-4fc1-a005-86519c0c6aea/cinder-api-log/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.709124 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ecffb890-5905-4fc1-a005-86519c0c6aea/cinder-api/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.879061 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_41bd633c-6afd-4c10-a933-287724b60a3d/probe/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.887580 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_41bd633c-6afd-4c10-a933-287724b60a3d/cinder-scheduler/0.log" Feb 14 15:39:41 crc kubenswrapper[4750]: I0214 15:39:41.998039 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-r86mv_bce32f17-aa17-4e19-bc8f-05b9f58cf140/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.155998 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-69zmg_afa7c9b6-0f84-49e3-9e1d-667b2ff99d34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.276302 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l7bxg_1d5fe023-9111-4eb5-af17-2efdd4b3a354/init/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.449142 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l7bxg_1d5fe023-9111-4eb5-af17-2efdd4b3a354/init/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 
15:39:42.496912 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tlvg8_3586119e-2daa-4e61-8c43-8e3a9c455ab5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.550802 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l7bxg_1d5fe023-9111-4eb5-af17-2efdd4b3a354/dnsmasq-dns/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.708195 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fe85d9dd-19fc-4155-af2c-62cc62eb029c/glance-log/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.728001 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fe85d9dd-19fc-4155-af2c-62cc62eb029c/glance-httpd/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.954121 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_27ce82d4-dcbe-48fe-8b91-8704ef172bf1/glance-log/0.log" Feb 14 15:39:42 crc kubenswrapper[4750]: I0214 15:39:42.961925 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_27ce82d4-dcbe-48fe-8b91-8704ef172bf1/glance-httpd/0.log" Feb 14 15:39:43 crc kubenswrapper[4750]: I0214 15:39:43.710404 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5cddcdd877-f796q_d7bb0ba4-eebd-41f7-935f-b9ba2a635618/heat-api/0.log" Feb 14 15:39:43 crc kubenswrapper[4750]: I0214 15:39:43.904857 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6fd8cd6df7-qsknx_5bfd32d3-f265-4016-b4d0-d8c8d65b5ca5/heat-cfnapi/0.log" Feb 14 15:39:43 crc kubenswrapper[4750]: I0214 15:39:43.972076 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-engine-d98b7d7bf-rc97s_7e8766e8-73cb-45c8-bd9c-865b3fa0ccdf/heat-engine/0.log" Feb 14 15:39:43 crc kubenswrapper[4750]: I0214 15:39:43.993386 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7xwf6_a0fe9116-89eb-49c2-a659-2dfdfe1c885a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.104639 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pqhr7_8af7ba28-4efa-4a07-9199-c3c64c043543/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.202680 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29518021-6v549_4718c9f5-bcf2-48a7-bd19-6a97de6ed02a/keystone-cron/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.431176 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_98683c54-5137-4357-be49-f22cdf9715db/kube-state-metrics/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.519134 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hggq4_7184cd06-d52e-49d6-9a58-520b47303252/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.595434 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bb6cf9d49-hj8cz_42f54e99-7974-44a5-9796-5ab9a50db818/keystone-api/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.644279 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-cdxns_6f331906-e9ac-4779-b6a3-28ac233ab472/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.746422 4750 scope.go:117] "RemoveContainer" 
containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:39:44 crc kubenswrapper[4750]: E0214 15:39:44.747644 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:39:44 crc kubenswrapper[4750]: I0214 15:39:44.864687 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_1aaa3ab6-4225-441d-b5b9-85ec8d30ca01/mysqld-exporter/0.log" Feb 14 15:39:45 crc kubenswrapper[4750]: I0214 15:39:45.206780 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b96685565-flxmp_dbfc4c3a-8875-43db-8ca1-e829524d280f/neutron-api/0.log" Feb 14 15:39:45 crc kubenswrapper[4750]: I0214 15:39:45.256838 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b96685565-flxmp_dbfc4c3a-8875-43db-8ca1-e829524d280f/neutron-httpd/0.log" Feb 14 15:39:45 crc kubenswrapper[4750]: I0214 15:39:45.277611 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7ffrb_868da7c8-8b42-419d-9801-06c947d3333c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:45 crc kubenswrapper[4750]: I0214 15:39:45.966231 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_76bccfd8-7ab4-4daa-b272-188438293cf7/nova-cell0-conductor-conductor/0.log" Feb 14 15:39:46 crc kubenswrapper[4750]: I0214 15:39:46.189672 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8ed4f6ec-3953-48de-a051-2af04cdafeb4/nova-api-log/0.log" Feb 14 15:39:46 crc kubenswrapper[4750]: 
I0214 15:39:46.329808 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8e8a5550-ea14-49fa-ae9d-b38d05a23254/nova-cell1-conductor-conductor/0.log" Feb 14 15:39:46 crc kubenswrapper[4750]: I0214 15:39:46.598573 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7ccaea34-1af0-4e5e-9771-bf53272bab57/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 15:39:46 crc kubenswrapper[4750]: I0214 15:39:46.651911 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nq9nf_e65f04d6-c4d8-4999-8a87-be675256e775/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:46 crc kubenswrapper[4750]: I0214 15:39:46.839693 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8ed4f6ec-3953-48de-a051-2af04cdafeb4/nova-api-api/0.log" Feb 14 15:39:46 crc kubenswrapper[4750]: I0214 15:39:46.864099 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2b53bb3a-9e74-4713-aedc-254b6671326d/nova-metadata-log/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.198685 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_42db0a00-1aa6-4754-840c-f93a2b927858/mysql-bootstrap/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.315432 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2f7209e1-923f-4507-8103-2020a196059f/nova-scheduler-scheduler/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.387005 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_42db0a00-1aa6-4754-840c-f93a2b927858/mysql-bootstrap/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.419571 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_42db0a00-1aa6-4754-840c-f93a2b927858/galera/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.633065 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e9776fb-2263-407b-93a2-3f27f9e0635f/mysql-bootstrap/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.808668 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e9776fb-2263-407b-93a2-3f27f9e0635f/galera/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.826215 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e9776fb-2263-407b-93a2-3f27f9e0635f/mysql-bootstrap/0.log" Feb 14 15:39:47 crc kubenswrapper[4750]: I0214 15:39:47.976513 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5d9cc9c1-5726-4507-bc40-a27f3aee83c4/openstackclient/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.176632 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4dn6h_761260d8-59af-48eb-bb26-aa7523be2d9d/ovn-controller/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.365371 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6h75r_a7797e0a-0f7c-42a0-bbe1-1c9f525eea52/openstack-network-exporter/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.478823 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovsdb-server-init/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.652850 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovsdb-server-init/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.676709 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovs-vswitchd/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.684862 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bpd75_bc6ad499-faf9-47ce-8df2-57c77fb7e2b5/ovsdb-server/0.log" Feb 14 15:39:48 crc kubenswrapper[4750]: I0214 15:39:48.941981 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cdx5g_2d98d1d6-d8c3-4b5c-b848-afceef7706f4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.069849 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_12a3b14e-20a2-4845-af37-ae9e7a6ebbc7/openstack-network-exporter/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.238531 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_12a3b14e-20a2-4845-af37-ae9e7a6ebbc7/ovn-northd/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.279057 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2b53bb3a-9e74-4713-aedc-254b6671326d/nova-metadata-metadata/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.337668 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cd41b510-5787-4c7e-9e0b-22301cd49f54/openstack-network-exporter/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.493698 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_27afbd74-b285-4efa-bd3f-33cc3c46363d/openstack-network-exporter/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.534867 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cd41b510-5787-4c7e-9e0b-22301cd49f54/ovsdbserver-nb/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.660741 4750 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_27afbd74-b285-4efa-bd3f-33cc3c46363d/ovsdbserver-sb/0.log" Feb 14 15:39:49 crc kubenswrapper[4750]: I0214 15:39:49.892066 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5474bc9d4d-7h6tg_c6e7913f-cd87-4593-9345-e10614cac99b/placement-api/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.002004 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5474bc9d4d-7h6tg_c6e7913f-cd87-4593-9345-e10614cac99b/placement-log/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.087452 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/init-config-reloader/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.327493 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/init-config-reloader/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.328156 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/config-reloader/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.378362 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/thanos-sidecar/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.408223 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7cdc1f12-6f04-4860-9536-32178d28e2b7/prometheus/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.582654 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0191954f-3b7c-4102-9784-f775fa6e08f2/setup-container/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.772384 4750 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0191954f-3b7c-4102-9784-f775fa6e08f2/rabbitmq/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.781095 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0191954f-3b7c-4102-9784-f775fa6e08f2/setup-container/0.log" Feb 14 15:39:50 crc kubenswrapper[4750]: I0214 15:39:50.847205 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf78a33e-a78a-4048-afa0-af9c27e4425d/setup-container/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.047353 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf78a33e-a78a-4048-afa0-af9c27e4425d/setup-container/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.179263 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf78a33e-a78a-4048-afa0-af9c27e4425d/rabbitmq/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.190445 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3f096636-5e6f-428e-8a30-9433d6ac312c/setup-container/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.423726 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3f096636-5e6f-428e-8a30-9433d6ac312c/rabbitmq/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.440756 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3f096636-5e6f-428e-8a30-9433d6ac312c/setup-container/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.442081 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_38d75bde-7432-41e0-860c-b2d7219e518a/setup-container/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.638678 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_38d75bde-7432-41e0-860c-b2d7219e518a/setup-container/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.675824 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-726m4_2636faba-c74f-47a7-8a7c-eb14094ab50b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.814011 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_38d75bde-7432-41e0-860c-b2d7219e518a/rabbitmq/0.log" Feb 14 15:39:51 crc kubenswrapper[4750]: I0214 15:39:51.874886 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xtjr7_17baaf13-3126-48d1-a32e-522bf2bf43ff/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.077781 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7w2dl_579b6931-42c7-4a8f-9045-b9b993aa3fbd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.134003 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-g4gfd_6ca25e17-509f-40d0-94e1-83db6398669c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.294123 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8m7xx_483eea7a-81f0-4f0c-92d8-dc0d3f713f10/ssh-known-hosts-edpm-deployment/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.599125 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b6899f7-wlrpr_1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c/proxy-server/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.625384 4750 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-ring-rebalance-p2fnm_e6aba344-d824-42da-996e-733b7480a2eb/swift-ring-rebalance/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.723141 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b6899f7-wlrpr_1ac425a1-9c7f-4b06-8fa8-2a8d26d1463c/proxy-httpd/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.911775 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-reaper/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.928023 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-auditor/0.log" Feb 14 15:39:52 crc kubenswrapper[4750]: I0214 15:39:52.979689 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-replicator/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.098172 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-auditor/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.115644 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/account-server/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.247448 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-server/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.247903 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-replicator/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.388052 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-auditor/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.396035 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/container-updater/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.510355 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-expirer/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.563153 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-replicator/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.612377 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-server/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.665032 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/object-updater/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.706097 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/rsync/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.768495 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e623022c-0cda-4463-b5e1-3157a1f8c1c1/swift-recon-cron/0.log" Feb 14 15:39:53 crc kubenswrapper[4750]: I0214 15:39:53.988229 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-58rdb_66d16d12-f651-4f21-9160-e22496e7e969/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:54 crc kubenswrapper[4750]: I0214 15:39:54.036108 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-8jjrb_b5799dfb-5d7b-40e0-9187-056a19186b75/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:54 crc kubenswrapper[4750]: I0214 15:39:54.253874 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6fb21a9f-5cfd-4a7a-a85d-b77bfd67edc5/test-operator-logs-container/0.log" Feb 14 15:39:54 crc kubenswrapper[4750]: I0214 15:39:54.471748 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-cmz2t_8e6d20f9-b240-426a-8769-6f07bf3f75d4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 15:39:54 crc kubenswrapper[4750]: I0214 15:39:54.999172 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e4d753a9-5bca-4940-9aa9-72a57f4f32a2/tempest-tests-tempest-tests-runner/0.log" Feb 14 15:39:58 crc kubenswrapper[4750]: I0214 15:39:58.762884 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:39:58 crc kubenswrapper[4750]: E0214 15:39:58.763679 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:39:59 crc kubenswrapper[4750]: I0214 15:39:59.509820 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2136d6a8-25e9-4eff-946e-bbc49dab0b04/memcached/0.log" Feb 14 15:40:12 crc kubenswrapper[4750]: I0214 15:40:12.741719 4750 scope.go:117] "RemoveContainer" 
containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:40:12 crc kubenswrapper[4750]: E0214 15:40:12.742494 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.272984 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/util/0.log" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.441159 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/util/0.log" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.464352 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/pull/0.log" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.492788 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/pull/0.log" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.679537 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/util/0.log" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.697253 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/extract/0.log" Feb 14 15:40:21 crc kubenswrapper[4750]: I0214 15:40:21.701858 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_81269f9ae6f864456e98ddeb66b107d051e29bf13346c59e93988621af6j294_4846bde8-e74e-40a2-b3d7-98176fd2552b/pull/0.log" Feb 14 15:40:22 crc kubenswrapper[4750]: I0214 15:40:22.171235 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-bmlpf_ff1e2ca9-56b1-4511-b59b-14256631d65f/manager/0.log" Feb 14 15:40:22 crc kubenswrapper[4750]: I0214 15:40:22.530031 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-sh6f9_b559262e-cbcd-486e-8602-ece46ff1ed14/manager/0.log" Feb 14 15:40:22 crc kubenswrapper[4750]: I0214 15:40:22.732750 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-7s6qb_27f7394c-167e-4dda-bd08-b2d2a49d5f13/manager/0.log" Feb 14 15:40:22 crc kubenswrapper[4750]: I0214 15:40:22.952271 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-wttvx_8fbd7079-b94a-4632-bfd7-5d550d6cbe1d/manager/0.log" Feb 14 15:40:23 crc kubenswrapper[4750]: I0214 15:40:23.426199 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-lgsxc_253c171a-c8f6-47d7-9490-a91d08ecd980/manager/0.log" Feb 14 15:40:23 crc kubenswrapper[4750]: I0214 15:40:23.757913 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-vjpnq_6e2542fa-3b0a-4a09-8d18-54037ebbbdf8/manager/0.log" Feb 14 15:40:24 crc kubenswrapper[4750]: I0214 15:40:24.001545 4750 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-gnmwc_3e79ecca-328f-4049-945b-a506ba6d56f9/manager/0.log" Feb 14 15:40:24 crc kubenswrapper[4750]: I0214 15:40:24.268321 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-5mm9s_c04784fa-abc5-4c4c-b891-6d73db5a17e1/manager/0.log" Feb 14 15:40:24 crc kubenswrapper[4750]: I0214 15:40:24.512803 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-87m54_413e7f1a-bab4-46b9-b59c-7d3a7cfc1e54/manager/0.log" Feb 14 15:40:24 crc kubenswrapper[4750]: I0214 15:40:24.792435 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-nvfv2_ac57dc96-afbc-4c7c-bd3c-9b763974a1c9/manager/0.log" Feb 14 15:40:24 crc kubenswrapper[4750]: I0214 15:40:24.826076 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-92td7_b146fda3-a4a4-4ebd-9ac0-32016dac7650/manager/0.log" Feb 14 15:40:25 crc kubenswrapper[4750]: I0214 15:40:25.056611 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-dq8x7_782775e0-d41f-4e9d-b6e5-4640a473b64a/manager/0.log" Feb 14 15:40:25 crc kubenswrapper[4750]: I0214 15:40:25.238204 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cmlf4s_64bc51de-7c3f-406e-899b-cbf5339658ea/manager/0.log" Feb 14 15:40:25 crc kubenswrapper[4750]: I0214 15:40:25.669783 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7b948d557b-gvsr9_ec0b7c77-5944-4b0e-bbd1-af0e3e14da56/operator/0.log" Feb 14 15:40:25 crc kubenswrapper[4750]: I0214 
15:40:25.868495 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tm7xh_0f00286b-008c-4863-b623-9789f8fa3b7a/registry-server/0.log" Feb 14 15:40:26 crc kubenswrapper[4750]: I0214 15:40:26.137603 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-nzqxf_585cf590-6dcc-49d2-a01f-9d6fa1612328/manager/0.log" Feb 14 15:40:26 crc kubenswrapper[4750]: I0214 15:40:26.436545 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-5dvwn_32e5c795-f58a-41c2-8b75-53ef0f77bef8/manager/0.log" Feb 14 15:40:26 crc kubenswrapper[4750]: I0214 15:40:26.682044 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xfnm8_bfeed48e-2ac8-4348-8c4c-0e239bd8c568/operator/0.log" Feb 14 15:40:26 crc kubenswrapper[4750]: I0214 15:40:26.741511 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:40:26 crc kubenswrapper[4750]: E0214 15:40:26.742031 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:40:26 crc kubenswrapper[4750]: I0214 15:40:26.923465 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-s7nsg_cfe0761e-63fc-480a-bfe7-c8c3e78d3785/manager/0.log" Feb 14 15:40:27 crc kubenswrapper[4750]: I0214 15:40:27.326536 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-zx56k_5ffa3c9f-1ef0-43df-b99c-3dd0c918f129/manager/0.log" Feb 14 15:40:27 crc kubenswrapper[4750]: I0214 15:40:27.645735 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6fdcfd45d9-rqdd9_8d3cff8a-de26-4c32-96b3-080e797d527f/manager/0.log" Feb 14 15:40:27 crc kubenswrapper[4750]: I0214 15:40:27.777743 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-xt5f9_34ecdca8-2927-432e-b770-c0c0d0b750e9/manager/0.log" Feb 14 15:40:27 crc kubenswrapper[4750]: I0214 15:40:27.985514 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bd569c557-twg4v_aba78e62-6759-445f-b8a9-9c8b36cf4a3a/manager/0.log" Feb 14 15:40:28 crc kubenswrapper[4750]: I0214 15:40:28.230566 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-dnj8w_60e32c9b-3598-476f-85d8-7cab15748de5/manager/0.log" Feb 14 15:40:34 crc kubenswrapper[4750]: I0214 15:40:34.891257 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-77cn7_4b1ad62d-48a6-4228-8de2-3710bd15b7f4/manager/0.log" Feb 14 15:40:40 crc kubenswrapper[4750]: I0214 15:40:40.742983 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:40:40 crc kubenswrapper[4750]: E0214 15:40:40.745366 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:40:50 crc kubenswrapper[4750]: I0214 15:40:50.133493 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rv4bx_9d1b8a23-779e-49fe-8e23-d9c4a53117b0/control-plane-machine-set-operator/0.log" Feb 14 15:40:50 crc kubenswrapper[4750]: I0214 15:40:50.319221 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzvww_19eb8e0f-83bc-40d2-a994-ba669171915e/kube-rbac-proxy/0.log" Feb 14 15:40:50 crc kubenswrapper[4750]: I0214 15:40:50.350482 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzvww_19eb8e0f-83bc-40d2-a994-ba669171915e/machine-api-operator/0.log" Feb 14 15:40:54 crc kubenswrapper[4750]: I0214 15:40:54.743557 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:40:54 crc kubenswrapper[4750]: E0214 15:40:54.744786 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:41:03 crc kubenswrapper[4750]: I0214 15:41:03.601494 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gtl8p_1fb05431-9eaa-4243-8a3f-fdc9699e102a/cert-manager-controller/0.log" Feb 14 15:41:03 crc kubenswrapper[4750]: I0214 15:41:03.846664 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lns8l_5dcd18b1-7ced-4567-a937-01a9c6c8b66f/cert-manager-cainjector/0.log" Feb 14 15:41:03 crc kubenswrapper[4750]: I0214 15:41:03.934814 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jttq6_dead2cb0-8f6c-40c2-b4a5-a1eb2a506890/cert-manager-webhook/0.log" Feb 14 15:41:05 crc kubenswrapper[4750]: I0214 15:41:05.742183 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:41:05 crc kubenswrapper[4750]: E0214 15:41:05.742979 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:41:17 crc kubenswrapper[4750]: I0214 15:41:17.451254 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-vwcr5_8f83c9d2-f263-4114-b375-f18f32d91231/nmstate-console-plugin/0.log" Feb 14 15:41:17 crc kubenswrapper[4750]: I0214 15:41:17.615079 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7fb7g_d67741a3-cda0-41c4-ad20-dac649d22a2d/nmstate-handler/0.log" Feb 14 15:41:17 crc kubenswrapper[4750]: I0214 15:41:17.671374 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-v5j7m_85849127-24fe-4e0b-9c43-c0d80d007c66/kube-rbac-proxy/0.log" Feb 14 15:41:17 crc kubenswrapper[4750]: I0214 15:41:17.676895 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-v5j7m_85849127-24fe-4e0b-9c43-c0d80d007c66/nmstate-metrics/0.log" Feb 14 15:41:17 crc kubenswrapper[4750]: I0214 15:41:17.840212 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-zrfxn_32ffd70f-c819-435f-bb5f-a3a705e4052e/nmstate-operator/0.log" Feb 14 15:41:17 crc kubenswrapper[4750]: I0214 15:41:17.907385 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-nsxh6_a1ca23af-ddcc-4041-8e76-7220d4e32212/nmstate-webhook/0.log" Feb 14 15:41:19 crc kubenswrapper[4750]: I0214 15:41:19.741915 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:41:19 crc kubenswrapper[4750]: E0214 15:41:19.743852 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:41:31 crc kubenswrapper[4750]: I0214 15:41:31.217926 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/kube-rbac-proxy/0.log" Feb 14 15:41:31 crc kubenswrapper[4750]: I0214 15:41:31.280045 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/manager/0.log" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.220433 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x64cq"] Feb 14 15:41:32 crc 
kubenswrapper[4750]: E0214 15:41:32.221163 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="registry-server" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.221183 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="registry-server" Feb 14 15:41:32 crc kubenswrapper[4750]: E0214 15:41:32.221208 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="extract-content" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.221214 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="extract-content" Feb 14 15:41:32 crc kubenswrapper[4750]: E0214 15:41:32.221223 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="extract-utilities" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.221228 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="extract-utilities" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.221478 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="6381bdfb-fa44-48b1-85db-08101faee67d" containerName="registry-server" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.227233 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.238103 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x64cq"] Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.346146 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-catalog-content\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.346222 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jvj\" (UniqueName: \"kubernetes.io/projected/27e506ea-2303-487b-880b-176b8e5c6b5b-kube-api-access-v7jvj\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.346283 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-utilities\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.448439 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jvj\" (UniqueName: \"kubernetes.io/projected/27e506ea-2303-487b-880b-176b8e5c6b5b-kube-api-access-v7jvj\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.448560 4750 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-utilities\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.448784 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-catalog-content\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.449175 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-utilities\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.449249 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-catalog-content\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.476825 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jvj\" (UniqueName: \"kubernetes.io/projected/27e506ea-2303-487b-880b-176b8e5c6b5b-kube-api-access-v7jvj\") pod \"redhat-operators-x64cq\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.558526 4750 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:32 crc kubenswrapper[4750]: I0214 15:41:32.741774 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:41:32 crc kubenswrapper[4750]: E0214 15:41:32.742141 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:41:33 crc kubenswrapper[4750]: I0214 15:41:33.274487 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x64cq"] Feb 14 15:41:33 crc kubenswrapper[4750]: I0214 15:41:33.671768 4750 generic.go:334] "Generic (PLEG): container finished" podID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerID="7e48dd76bacf40f337ae9ff5dfb99067f931fc99417f598f887a0f466725101f" exitCode=0 Feb 14 15:41:33 crc kubenswrapper[4750]: I0214 15:41:33.671872 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerDied","Data":"7e48dd76bacf40f337ae9ff5dfb99067f931fc99417f598f887a0f466725101f"} Feb 14 15:41:33 crc kubenswrapper[4750]: I0214 15:41:33.672194 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerStarted","Data":"78630704bca3f94ebcf9a2e1d6717aeff576bdfe85cb4e730397f380f3f395c5"} Feb 14 15:41:34 crc kubenswrapper[4750]: I0214 15:41:34.687769 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" 
event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerStarted","Data":"5f75bacefb8108f0c115919302830c3c012eb40fddb63598cca049847116d6f2"} Feb 14 15:41:39 crc kubenswrapper[4750]: I0214 15:41:39.753815 4750 generic.go:334] "Generic (PLEG): container finished" podID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerID="5f75bacefb8108f0c115919302830c3c012eb40fddb63598cca049847116d6f2" exitCode=0 Feb 14 15:41:39 crc kubenswrapper[4750]: I0214 15:41:39.753853 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerDied","Data":"5f75bacefb8108f0c115919302830c3c012eb40fddb63598cca049847116d6f2"} Feb 14 15:41:40 crc kubenswrapper[4750]: I0214 15:41:40.768998 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerStarted","Data":"8a0004d1af99e89e2c6548fb655f39dc4df8e2b667e5fa102089103648c458f9"} Feb 14 15:41:40 crc kubenswrapper[4750]: I0214 15:41:40.793580 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x64cq" podStartSLOduration=2.272285186 podStartE2EDuration="8.7935614s" podCreationTimestamp="2026-02-14 15:41:32 +0000 UTC" firstStartedPulling="2026-02-14 15:41:33.673506531 +0000 UTC m=+6565.699496012" lastFinishedPulling="2026-02-14 15:41:40.194782725 +0000 UTC m=+6572.220772226" observedRunningTime="2026-02-14 15:41:40.787762335 +0000 UTC m=+6572.813751816" watchObservedRunningTime="2026-02-14 15:41:40.7935614 +0000 UTC m=+6572.819550881" Feb 14 15:41:42 crc kubenswrapper[4750]: I0214 15:41:42.559784 4750 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:42 crc kubenswrapper[4750]: I0214 15:41:42.560087 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:41:43 crc kubenswrapper[4750]: I0214 15:41:43.612339 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x64cq" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server" probeResult="failure" output=< Feb 14 15:41:43 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:41:43 crc kubenswrapper[4750]: > Feb 14 15:41:44 crc kubenswrapper[4750]: I0214 15:41:44.940212 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6hfcq_874d7068-1761-42b7-8e65-5ea7669259f4/prometheus-operator/0.log" Feb 14 15:41:45 crc kubenswrapper[4750]: I0214 15:41:45.168652 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_75d8f617-5e52-472a-922a-88563b49d041/prometheus-operator-admission-webhook/0.log" Feb 14 15:41:45 crc kubenswrapper[4750]: I0214 15:41:45.227898 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_f776de07-4c75-4295-839f-6e10713be326/prometheus-operator-admission-webhook/0.log" Feb 14 15:41:45 crc kubenswrapper[4750]: I0214 15:41:45.421037 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xn27s_1852ee04-b190-41b5-8261-d481c237b27d/operator/0.log" Feb 14 15:41:45 crc kubenswrapper[4750]: I0214 15:41:45.468649 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-pdpkt_95744c6b-6feb-4934-b1b6-6d73a3c17ad0/observability-ui-dashboards/0.log" Feb 14 15:41:45 crc kubenswrapper[4750]: I0214 15:41:45.638614 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vl469_8f6b191a-aa81-4827-80eb-5bbbdf54eeba/perses-operator/0.log" Feb 14 15:41:47 crc kubenswrapper[4750]: I0214 15:41:47.742786 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:41:47 crc kubenswrapper[4750]: E0214 15:41:47.743486 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:41:53 crc kubenswrapper[4750]: I0214 15:41:53.615267 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x64cq" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server" probeResult="failure" output=< Feb 14 15:41:53 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:41:53 crc kubenswrapper[4750]: > Feb 14 15:42:00 crc kubenswrapper[4750]: I0214 15:42:00.822062 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-8x65s_f13003c2-701e-4806-abb5-23f0e95cf8c2/cluster-logging-operator/0.log" Feb 14 15:42:00 crc kubenswrapper[4750]: I0214 15:42:00.987161 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-s8ptt_a6b3f125-069d-4e80-92bc-3e4c32659e7a/collector/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.031030 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_bfa5f022-80f6-4ae5-8734-d6b9b9925490/loki-compactor/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.208656 4750 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-zgn87_75cfd9e5-1c5d-4f8c-b736-c7f4d3415033/loki-distributor/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.274524 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-z8vwl_abdb8ead-5282-4e10-a261-b90509d22bbd/gateway/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.384082 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-z8vwl_abdb8ead-5282-4e10-a261-b90509d22bbd/opa/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.480806 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-zsk8f_11746f0c-702d-4684-97e8-46c8b3f2d75a/gateway/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.482564 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595d4c559-zsk8f_11746f0c-702d-4684-97e8-46c8b3f2d75a/opa/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.626010 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7d57e203-6e0c-4079-ba36-ffb3c7e69913/loki-index-gateway/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.746579 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_d4d23b53-2885-4966-aa62-1e61fd2f2af6/loki-ingester/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.855603 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-hwkcr_158c19c7-53f4-4964-89df-ee7509251e08/loki-querier/0.log" Feb 14 15:42:01 crc kubenswrapper[4750]: I0214 15:42:01.945317 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-j8bzm_6bb3081d-4136-43e1-a9a9-9d9b5ce10809/loki-query-frontend/0.log" Feb 14 15:42:02 crc kubenswrapper[4750]: I0214 15:42:02.743733 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:42:02 crc kubenswrapper[4750]: E0214 15:42:02.744537 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:42:03 crc kubenswrapper[4750]: I0214 15:42:03.633300 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x64cq" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server" probeResult="failure" output=< Feb 14 15:42:03 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:42:03 crc kubenswrapper[4750]: > Feb 14 15:42:13 crc kubenswrapper[4750]: I0214 15:42:13.630642 4750 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x64cq" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server" probeResult="failure" output=< Feb 14 15:42:13 crc kubenswrapper[4750]: timeout: failed to connect service ":50051" within 1s Feb 14 15:42:13 crc kubenswrapper[4750]: > Feb 14 15:42:13 crc kubenswrapper[4750]: I0214 15:42:13.750530 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:42:13 crc kubenswrapper[4750]: E0214 15:42:13.751437 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:42:16 crc kubenswrapper[4750]: I0214 15:42:16.869489 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-v9m54_da5b2754-b4d8-46a1-ad93-926e2ae005eb/kube-rbac-proxy/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.030057 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-v9m54_da5b2754-b4d8-46a1-ad93-926e2ae005eb/controller/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.088184 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.359928 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.371067 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.384399 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.408689 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.585322 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.590325 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.611133 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.619422 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.825547 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-reloader/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.846719 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-metrics/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.846881 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/cp-frr-files/0.log" Feb 14 15:42:17 crc kubenswrapper[4750]: I0214 15:42:17.858175 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/controller/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.051173 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/kube-rbac-proxy/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.058223 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/frr-metrics/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.094197 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/kube-rbac-proxy-frr/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.291917 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/reloader/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.367182 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-8ljxj_b2892e47-7716-4ebc-86ef-376d952f3546/frr-k8s-webhook-server/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.598323 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-86bdb8fc5c-ps8mb_863644f3-8ec4-4391-a74e-7fe2d8dc4b3c/manager/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.899452 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wbx4h_9f176239-5523-47f3-909c-e7c77b65acf5/kube-rbac-proxy/0.log" Feb 14 15:42:18 crc kubenswrapper[4750]: I0214 15:42:18.902548 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5967b4f7c5-67sfd_bbf10811-6f76-4024-83ce-7263f00af6bb/webhook-server/0.log" Feb 14 15:42:19 crc kubenswrapper[4750]: I0214 15:42:19.747796 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wbx4h_9f176239-5523-47f3-909c-e7c77b65acf5/speaker/0.log" Feb 14 15:42:20 crc kubenswrapper[4750]: I0214 15:42:20.023829 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mm2g_9fe99c26-de80-4b40-805c-95d804f86cf7/frr/0.log" Feb 14 15:42:22 crc kubenswrapper[4750]: I0214 15:42:22.630246 4750 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:42:22 crc kubenswrapper[4750]: I0214 15:42:22.696593 4750 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:42:22 crc kubenswrapper[4750]: I0214 15:42:22.876793 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x64cq"] Feb 14 15:42:24 crc kubenswrapper[4750]: I0214 15:42:24.248572 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x64cq" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server" containerID="cri-o://8a0004d1af99e89e2c6548fb655f39dc4df8e2b667e5fa102089103648c458f9" gracePeriod=2 Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.262084 4750 generic.go:334] "Generic (PLEG): container finished" podID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerID="8a0004d1af99e89e2c6548fb655f39dc4df8e2b667e5fa102089103648c458f9" exitCode=0 Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.262238 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerDied","Data":"8a0004d1af99e89e2c6548fb655f39dc4df8e2b667e5fa102089103648c458f9"} Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.262445 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x64cq" event={"ID":"27e506ea-2303-487b-880b-176b8e5c6b5b","Type":"ContainerDied","Data":"78630704bca3f94ebcf9a2e1d6717aeff576bdfe85cb4e730397f380f3f395c5"} Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.262465 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78630704bca3f94ebcf9a2e1d6717aeff576bdfe85cb4e730397f380f3f395c5" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 
15:42:25.369419 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.488714 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-catalog-content\") pod \"27e506ea-2303-487b-880b-176b8e5c6b5b\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.489136 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jvj\" (UniqueName: \"kubernetes.io/projected/27e506ea-2303-487b-880b-176b8e5c6b5b-kube-api-access-v7jvj\") pod \"27e506ea-2303-487b-880b-176b8e5c6b5b\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.489212 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-utilities\") pod \"27e506ea-2303-487b-880b-176b8e5c6b5b\" (UID: \"27e506ea-2303-487b-880b-176b8e5c6b5b\") " Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.489715 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-utilities" (OuterVolumeSpecName: "utilities") pod "27e506ea-2303-487b-880b-176b8e5c6b5b" (UID: "27e506ea-2303-487b-880b-176b8e5c6b5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.489837 4750 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.497853 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e506ea-2303-487b-880b-176b8e5c6b5b-kube-api-access-v7jvj" (OuterVolumeSpecName: "kube-api-access-v7jvj") pod "27e506ea-2303-487b-880b-176b8e5c6b5b" (UID: "27e506ea-2303-487b-880b-176b8e5c6b5b"). InnerVolumeSpecName "kube-api-access-v7jvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.594258 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jvj\" (UniqueName: \"kubernetes.io/projected/27e506ea-2303-487b-880b-176b8e5c6b5b-kube-api-access-v7jvj\") on node \"crc\" DevicePath \"\"" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.620426 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27e506ea-2303-487b-880b-176b8e5c6b5b" (UID: "27e506ea-2303-487b-880b-176b8e5c6b5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.695997 4750 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27e506ea-2303-487b-880b-176b8e5c6b5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:25.742699 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:42:26 crc kubenswrapper[4750]: E0214 15:42:25.743085 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:26.283381 4750 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x64cq" Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:26.334181 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x64cq"] Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:26.350184 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x64cq"] Feb 14 15:42:26 crc kubenswrapper[4750]: I0214 15:42:26.759504 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" path="/var/lib/kubelet/pods/27e506ea-2303-487b-880b-176b8e5c6b5b/volumes" Feb 14 15:42:33 crc kubenswrapper[4750]: I0214 15:42:33.804169 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/util/0.log" Feb 14 15:42:33 crc kubenswrapper[4750]: I0214 15:42:33.933075 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/util/0.log" Feb 14 15:42:33 crc kubenswrapper[4750]: I0214 15:42:33.934848 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/pull/0.log" Feb 14 15:42:33 crc kubenswrapper[4750]: I0214 15:42:33.994158 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/pull/0.log" Feb 14 15:42:34 crc kubenswrapper[4750]: I0214 15:42:34.173530 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/util/0.log" Feb 14 15:42:34 crc 
kubenswrapper[4750]: I0214 15:42:34.178363 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/pull/0.log" Feb 14 15:42:34 crc kubenswrapper[4750]: I0214 15:42:34.218577 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19b6wzv_c8f11fe7-74bb-4283-add4-8ca1fb45a3ae/extract/0.log" Feb 14 15:42:34 crc kubenswrapper[4750]: I0214 15:42:34.356521 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/util/0.log" Feb 14 15:42:34 crc kubenswrapper[4750]: I0214 15:42:34.554251 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/pull/0.log" Feb 14 15:42:34 crc kubenswrapper[4750]: I0214 15:42:34.564021 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/util/0.log" Feb 14 15:42:34 crc kubenswrapper[4750]: I0214 15:42:34.614639 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/pull/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.014708 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/util/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.057861 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/pull/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.090617 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmmpz_b9ad9eda-9ae2-4549-b367-4ae1a795e809/extract/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.236830 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/util/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.412478 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/util/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.463795 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/pull/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.471869 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/pull/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.642003 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/extract/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.650634 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/pull/0.log" Feb 
14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.682090 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gjh97_2fbefb7a-a8d6-467e-9fd9-fdd8fb34fc1c/util/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.800086 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-utilities/0.log" Feb 14 15:42:35 crc kubenswrapper[4750]: I0214 15:42:35.995666 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-content/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.013825 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-utilities/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.047157 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-content/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.229017 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-utilities/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.234110 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/extract-content/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.490899 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-utilities/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.727258 4750 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-content/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.732669 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-content/0.log" Feb 14 15:42:36 crc kubenswrapper[4750]: I0214 15:42:36.793726 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-utilities/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.164334 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-utilities/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.267592 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/extract-content/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.466312 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q58l2_6dd3db5f-80db-4c43-acb5-445300c95649/registry-server/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.512861 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/util/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.837356 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/util/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.839477 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/pull/0.log" Feb 14 15:42:37 crc kubenswrapper[4750]: I0214 15:42:37.953970 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/pull/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.062061 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/pull/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.096425 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/util/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.229533 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089898h4xp_9bd1dff7-6b82-41bd-959a-8bc13f6c5a77/extract/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.277229 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d89lk_0e16d6e0-2460-4802-aef3-14c53a22c5f8/registry-server/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.278485 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/util/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.496228 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/util/0.log" Feb 14 15:42:38 crc 
kubenswrapper[4750]: I0214 15:42:38.528153 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/pull/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.558987 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/pull/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.722604 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/util/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.730703 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/pull/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.741092 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca9rfjk_f97a5e13-6e27-4821-aadd-826dcebbfd6c/extract/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.797826 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tdtms_b634ea23-ca70-446c-8a62-0910256d9025/marketplace-operator/0.log" Feb 14 15:42:38 crc kubenswrapper[4750]: I0214 15:42:38.920731 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-utilities/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.108612 4750 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-utilities/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.148098 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-content/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.167224 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-content/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.396375 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-utilities/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.420551 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/extract-content/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.466040 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-utilities/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.559504 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8tt7_ecca3833-89b7-4533-a365-160f8af73d1a/registry-server/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.660775 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-content/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.666899 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-content/0.log" 
Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.688849 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-utilities/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.896641 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-content/0.log" Feb 14 15:42:39 crc kubenswrapper[4750]: I0214 15:42:39.916623 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/extract-utilities/0.log" Feb 14 15:42:40 crc kubenswrapper[4750]: I0214 15:42:40.653196 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7jn2x_d9078426-1b92-4b30-8529-9ad63d68bf73/registry-server/0.log" Feb 14 15:42:40 crc kubenswrapper[4750]: I0214 15:42:40.745952 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:42:40 crc kubenswrapper[4750]: E0214 15:42:40.746558 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:42:52 crc kubenswrapper[4750]: I0214 15:42:52.742232 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:42:52 crc kubenswrapper[4750]: E0214 15:42:52.743195 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:42:53 crc kubenswrapper[4750]: I0214 15:42:53.981994 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6hfcq_874d7068-1761-42b7-8e65-5ea7669259f4/prometheus-operator/0.log" Feb 14 15:42:54 crc kubenswrapper[4750]: I0214 15:42:54.020442 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-l6jcz_f776de07-4c75-4295-839f-6e10713be326/prometheus-operator-admission-webhook/0.log" Feb 14 15:42:54 crc kubenswrapper[4750]: I0214 15:42:54.028927 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-578f5bf547-45zb2_75d8f617-5e52-472a-922a-88563b49d041/prometheus-operator-admission-webhook/0.log" Feb 14 15:42:54 crc kubenswrapper[4750]: I0214 15:42:54.164833 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xn27s_1852ee04-b190-41b5-8261-d481c237b27d/operator/0.log" Feb 14 15:42:54 crc kubenswrapper[4750]: I0214 15:42:54.186256 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-pdpkt_95744c6b-6feb-4934-b1b6-6d73a3c17ad0/observability-ui-dashboards/0.log" Feb 14 15:42:54 crc kubenswrapper[4750]: I0214 15:42:54.222827 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vl469_8f6b191a-aa81-4827-80eb-5bbbdf54eeba/perses-operator/0.log" Feb 14 15:43:03 crc kubenswrapper[4750]: I0214 15:43:03.741742 4750 scope.go:117] "RemoveContainer" 
containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:43:03 crc kubenswrapper[4750]: E0214 15:43:03.742731 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:43:08 crc kubenswrapper[4750]: I0214 15:43:08.506001 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/kube-rbac-proxy/0.log" Feb 14 15:43:08 crc kubenswrapper[4750]: I0214 15:43:08.523649 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-846996f79f-rwhb4_b4c5732e-22b3-490f-a53b-d09c07a0a36f/manager/0.log" Feb 14 15:43:17 crc kubenswrapper[4750]: I0214 15:43:17.742471 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:43:17 crc kubenswrapper[4750]: E0214 15:43:17.743199 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" Feb 14 15:43:29 crc kubenswrapper[4750]: I0214 15:43:29.742582 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0" Feb 14 15:43:29 crc kubenswrapper[4750]: E0214 
15:43:29.743284 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:43:33 crc kubenswrapper[4750]: E0214 15:43:33.053538 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:51532->38.102.83.36:35453: write tcp 38.102.83.36:51532->38.102.83.36:35453: write: broken pipe
Feb 14 15:43:41 crc kubenswrapper[4750]: I0214 15:43:41.742598 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"
Feb 14 15:43:41 crc kubenswrapper[4750]: E0214 15:43:41.743322 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:43:55 crc kubenswrapper[4750]: I0214 15:43:55.742610 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"
Feb 14 15:43:55 crc kubenswrapper[4750]: E0214 15:43:55.744050 4750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j5rld_openshift-machine-config-operator(581740c6-1f28-4471-8131-5d5042cc59f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5"
Feb 14 15:44:08 crc kubenswrapper[4750]: I0214 15:44:08.756347 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"
Feb 14 15:44:09 crc kubenswrapper[4750]: I0214 15:44:09.754793 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"7f8c2ae337c8b2107c86e9c9d956b672e0995b87603f44558e49e8e7af9477a5"}
Feb 14 15:44:30 crc kubenswrapper[4750]: I0214 15:44:30.711688 4750 scope.go:117] "RemoveContainer" containerID="a9769d6b82ca0dd2b672bd18a3c37845b9a7aa415c1dd3a4b2e1d1d680c0dfdb"
Feb 14 15:44:30 crc kubenswrapper[4750]: I0214 15:44:30.749303 4750 scope.go:117] "RemoveContainer" containerID="3d1a9d4bd58677dc41b615f4036e811b45ee44b8b0bcf0d67665683afcba17bb"
Feb 14 15:44:30 crc kubenswrapper[4750]: I0214 15:44:30.807243 4750 scope.go:117] "RemoveContainer" containerID="2d3fa2eda790109d02bd1bcc93c0d7b7e33be148e7dc722067240d58cb276527"
Feb 14 15:44:30 crc kubenswrapper[4750]: I0214 15:44:30.863039 4750 scope.go:117] "RemoveContainer" containerID="ec064589a945dbe9f167917179785c6aec8a86921cc1e719e9d4536f4f2196d7"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.251497 4750 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"]
Feb 14 15:45:00 crc kubenswrapper[4750]: E0214 15:45:00.254313 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="extract-utilities"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.254372 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="extract-utilities"
Feb 14 15:45:00 crc kubenswrapper[4750]: E0214 15:45:00.254427 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.254444 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server"
Feb 14 15:45:00 crc kubenswrapper[4750]: E0214 15:45:00.254467 4750 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="extract-content"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.254478 4750 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="extract-content"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.254849 4750 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e506ea-2303-487b-880b-176b8e5c6b5b" containerName="registry-server"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.256497 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.267842 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"]
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.283434 4750 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.283434 4750 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.357314 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89561c1b-1faa-4581-9ade-5b4cc4485274-secret-volume\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.357458 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89561c1b-1faa-4581-9ade-5b4cc4485274-config-volume\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.357488 4750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbd9c\" (UniqueName: \"kubernetes.io/projected/89561c1b-1faa-4581-9ade-5b4cc4485274-kube-api-access-lbd9c\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.459732 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89561c1b-1faa-4581-9ade-5b4cc4485274-config-volume\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.459778 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbd9c\" (UniqueName: \"kubernetes.io/projected/89561c1b-1faa-4581-9ade-5b4cc4485274-kube-api-access-lbd9c\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.459926 4750 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89561c1b-1faa-4581-9ade-5b4cc4485274-secret-volume\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.461150 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89561c1b-1faa-4581-9ade-5b4cc4485274-config-volume\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.470364 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89561c1b-1faa-4581-9ade-5b4cc4485274-secret-volume\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.477570 4750 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbd9c\" (UniqueName: \"kubernetes.io/projected/89561c1b-1faa-4581-9ade-5b4cc4485274-kube-api-access-lbd9c\") pod \"collect-profiles-29518065-vdl4c\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:00 crc kubenswrapper[4750]: I0214 15:45:00.596217 4750 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:01 crc kubenswrapper[4750]: W0214 15:45:01.483946 4750 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89561c1b_1faa_4581_9ade_5b4cc4485274.slice/crio-35ada692fd867cf090d48229b6df1a61721d904db26d3233ac0ca9a4901cc668 WatchSource:0}: Error finding container 35ada692fd867cf090d48229b6df1a61721d904db26d3233ac0ca9a4901cc668: Status 404 returned error can't find the container with id 35ada692fd867cf090d48229b6df1a61721d904db26d3233ac0ca9a4901cc668
Feb 14 15:45:01 crc kubenswrapper[4750]: I0214 15:45:01.486996 4750 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"]
Feb 14 15:45:02 crc kubenswrapper[4750]: I0214 15:45:02.442722 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c" event={"ID":"89561c1b-1faa-4581-9ade-5b4cc4485274","Type":"ContainerStarted","Data":"5d10cfadf96fbd9689da19f94db99ee13c82cbf7e45f4933c3ffcbd048412263"}
Feb 14 15:45:02 crc kubenswrapper[4750]: I0214 15:45:02.443067 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c" event={"ID":"89561c1b-1faa-4581-9ade-5b4cc4485274","Type":"ContainerStarted","Data":"35ada692fd867cf090d48229b6df1a61721d904db26d3233ac0ca9a4901cc668"}
Feb 14 15:45:02 crc kubenswrapper[4750]: I0214 15:45:02.467616 4750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c" podStartSLOduration=2.467595659 podStartE2EDuration="2.467595659s" podCreationTimestamp="2026-02-14 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 15:45:02.463417361 +0000 UTC m=+6774.489406882" watchObservedRunningTime="2026-02-14 15:45:02.467595659 +0000 UTC m=+6774.493585150"
Feb 14 15:45:03 crc kubenswrapper[4750]: I0214 15:45:03.458923 4750 generic.go:334] "Generic (PLEG): container finished" podID="89561c1b-1faa-4581-9ade-5b4cc4485274" containerID="5d10cfadf96fbd9689da19f94db99ee13c82cbf7e45f4933c3ffcbd048412263" exitCode=0
Feb 14 15:45:03 crc kubenswrapper[4750]: I0214 15:45:03.459045 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c" event={"ID":"89561c1b-1faa-4581-9ade-5b4cc4485274","Type":"ContainerDied","Data":"5d10cfadf96fbd9689da19f94db99ee13c82cbf7e45f4933c3ffcbd048412263"}
Feb 14 15:45:04 crc kubenswrapper[4750]: I0214 15:45:04.910340 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:04 crc kubenswrapper[4750]: I0214 15:45:04.992431 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbd9c\" (UniqueName: \"kubernetes.io/projected/89561c1b-1faa-4581-9ade-5b4cc4485274-kube-api-access-lbd9c\") pod \"89561c1b-1faa-4581-9ade-5b4cc4485274\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") "
Feb 14 15:45:04 crc kubenswrapper[4750]: I0214 15:45:04.992469 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89561c1b-1faa-4581-9ade-5b4cc4485274-secret-volume\") pod \"89561c1b-1faa-4581-9ade-5b4cc4485274\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") "
Feb 14 15:45:04 crc kubenswrapper[4750]: I0214 15:45:04.992637 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89561c1b-1faa-4581-9ade-5b4cc4485274-config-volume\") pod \"89561c1b-1faa-4581-9ade-5b4cc4485274\" (UID: \"89561c1b-1faa-4581-9ade-5b4cc4485274\") "
Feb 14 15:45:04 crc kubenswrapper[4750]: I0214 15:45:04.993879 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89561c1b-1faa-4581-9ade-5b4cc4485274-config-volume" (OuterVolumeSpecName: "config-volume") pod "89561c1b-1faa-4581-9ade-5b4cc4485274" (UID: "89561c1b-1faa-4581-9ade-5b4cc4485274"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.000344 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89561c1b-1faa-4581-9ade-5b4cc4485274-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89561c1b-1faa-4581-9ade-5b4cc4485274" (UID: "89561c1b-1faa-4581-9ade-5b4cc4485274"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.001361 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89561c1b-1faa-4581-9ade-5b4cc4485274-kube-api-access-lbd9c" (OuterVolumeSpecName: "kube-api-access-lbd9c") pod "89561c1b-1faa-4581-9ade-5b4cc4485274" (UID: "89561c1b-1faa-4581-9ade-5b4cc4485274"). InnerVolumeSpecName "kube-api-access-lbd9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.095431 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbd9c\" (UniqueName: \"kubernetes.io/projected/89561c1b-1faa-4581-9ade-5b4cc4485274-kube-api-access-lbd9c\") on node \"crc\" DevicePath \"\""
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.095464 4750 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89561c1b-1faa-4581-9ade-5b4cc4485274-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.095474 4750 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89561c1b-1faa-4581-9ade-5b4cc4485274-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.490716 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c" event={"ID":"89561c1b-1faa-4581-9ade-5b4cc4485274","Type":"ContainerDied","Data":"35ada692fd867cf090d48229b6df1a61721d904db26d3233ac0ca9a4901cc668"}
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.490785 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35ada692fd867cf090d48229b6df1a61721d904db26d3233ac0ca9a4901cc668"
Feb 14 15:45:05 crc kubenswrapper[4750]: I0214 15:45:05.490872 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518065-vdl4c"
Feb 14 15:45:06 crc kubenswrapper[4750]: I0214 15:45:06.024278 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg"]
Feb 14 15:45:06 crc kubenswrapper[4750]: I0214 15:45:06.043363 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518020-rl7jg"]
Feb 14 15:45:06 crc kubenswrapper[4750]: I0214 15:45:06.765177 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f5ebaa-4091-449c-980c-e755e93f8094" path="/var/lib/kubelet/pods/18f5ebaa-4091-449c-980c-e755e93f8094/volumes"
Feb 14 15:45:16 crc kubenswrapper[4750]: I0214 15:45:16.657827 4750 generic.go:334] "Generic (PLEG): container finished" podID="a06d5eb3-be79-41e9-9b6f-ad18672dca78" containerID="cc957ad0192b50c6473ff4b3a5b7b20f65711e1db0517c4e7c8c60db0a3b992f" exitCode=0
Feb 14 15:45:16 crc kubenswrapper[4750]: I0214 15:45:16.657911 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5v79s/must-gather-8q79t" event={"ID":"a06d5eb3-be79-41e9-9b6f-ad18672dca78","Type":"ContainerDied","Data":"cc957ad0192b50c6473ff4b3a5b7b20f65711e1db0517c4e7c8c60db0a3b992f"}
Feb 14 15:45:16 crc kubenswrapper[4750]: I0214 15:45:16.659595 4750 scope.go:117] "RemoveContainer" containerID="cc957ad0192b50c6473ff4b3a5b7b20f65711e1db0517c4e7c8c60db0a3b992f"
Feb 14 15:45:17 crc kubenswrapper[4750]: I0214 15:45:17.064714 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5v79s_must-gather-8q79t_a06d5eb3-be79-41e9-9b6f-ad18672dca78/gather/0.log"
Feb 14 15:45:27 crc kubenswrapper[4750]: E0214 15:45:27.966538 4750 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:46240->38.102.83.36:35453: write tcp 38.102.83.36:46240->38.102.83.36:35453: write: connection reset by peer
Feb 14 15:45:30 crc kubenswrapper[4750]: I0214 15:45:30.969982 4750 scope.go:117] "RemoveContainer" containerID="14945fb802b3d19e90d68ef6c64a7998bab5e354d98731335368f7614b91d0b7"
Feb 14 15:45:31 crc kubenswrapper[4750]: I0214 15:45:31.013075 4750 scope.go:117] "RemoveContainer" containerID="b2858edb9cdbd83182ee17956e7acdc3cacf6a6b9346803ea9f95b30b080405c"
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.193649 4750 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5v79s/must-gather-8q79t"]
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.198297 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5v79s/must-gather-8q79t" podUID="a06d5eb3-be79-41e9-9b6f-ad18672dca78" containerName="copy" containerID="cri-o://c8056789c1428e1b3f33e96d9ccc97c53695b041b28be5a05b9119c74a4a2dae" gracePeriod=2
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.212667 4750 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5v79s/must-gather-8q79t"]
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.892283 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5v79s_must-gather-8q79t_a06d5eb3-be79-41e9-9b6f-ad18672dca78/copy/0.log"
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.892918 4750 generic.go:334] "Generic (PLEG): container finished" podID="a06d5eb3-be79-41e9-9b6f-ad18672dca78" containerID="c8056789c1428e1b3f33e96d9ccc97c53695b041b28be5a05b9119c74a4a2dae" exitCode=143
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.892970 4750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe21560d7c72cb02e34394e69239195626ba1745b5bc1ad173c77a348b0a4815"
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.908663 4750 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5v79s_must-gather-8q79t_a06d5eb3-be79-41e9-9b6f-ad18672dca78/copy/0.log"
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.909221 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/must-gather-8q79t"
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.981699 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a06d5eb3-be79-41e9-9b6f-ad18672dca78-must-gather-output\") pod \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") "
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.981757 4750 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55c5z\" (UniqueName: \"kubernetes.io/projected/a06d5eb3-be79-41e9-9b6f-ad18672dca78-kube-api-access-55c5z\") pod \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\" (UID: \"a06d5eb3-be79-41e9-9b6f-ad18672dca78\") "
Feb 14 15:45:33 crc kubenswrapper[4750]: I0214 15:45:33.988511 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06d5eb3-be79-41e9-9b6f-ad18672dca78-kube-api-access-55c5z" (OuterVolumeSpecName: "kube-api-access-55c5z") pod "a06d5eb3-be79-41e9-9b6f-ad18672dca78" (UID: "a06d5eb3-be79-41e9-9b6f-ad18672dca78"). InnerVolumeSpecName "kube-api-access-55c5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 15:45:34 crc kubenswrapper[4750]: I0214 15:45:34.087038 4750 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55c5z\" (UniqueName: \"kubernetes.io/projected/a06d5eb3-be79-41e9-9b6f-ad18672dca78-kube-api-access-55c5z\") on node \"crc\" DevicePath \"\""
Feb 14 15:45:34 crc kubenswrapper[4750]: I0214 15:45:34.141468 4750 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06d5eb3-be79-41e9-9b6f-ad18672dca78-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a06d5eb3-be79-41e9-9b6f-ad18672dca78" (UID: "a06d5eb3-be79-41e9-9b6f-ad18672dca78"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 15:45:34 crc kubenswrapper[4750]: I0214 15:45:34.189100 4750 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a06d5eb3-be79-41e9-9b6f-ad18672dca78-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 14 15:45:34 crc kubenswrapper[4750]: I0214 15:45:34.807385 4750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06d5eb3-be79-41e9-9b6f-ad18672dca78" path="/var/lib/kubelet/pods/a06d5eb3-be79-41e9-9b6f-ad18672dca78/volumes"
Feb 14 15:45:34 crc kubenswrapper[4750]: I0214 15:45:34.901573 4750 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5v79s/must-gather-8q79t"
Feb 14 15:46:30 crc kubenswrapper[4750]: I0214 15:46:30.129464 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 15:46:30 crc kubenswrapper[4750]: I0214 15:46:30.130520 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 15:46:31 crc kubenswrapper[4750]: I0214 15:46:31.185441 4750 scope.go:117] "RemoveContainer" containerID="cc957ad0192b50c6473ff4b3a5b7b20f65711e1db0517c4e7c8c60db0a3b992f"
Feb 14 15:46:31 crc kubenswrapper[4750]: I0214 15:46:31.260249 4750 scope.go:117] "RemoveContainer" containerID="c8056789c1428e1b3f33e96d9ccc97c53695b041b28be5a05b9119c74a4a2dae"
Feb 14 15:47:00 crc kubenswrapper[4750]: I0214 15:47:00.128879 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 15:47:00 crc kubenswrapper[4750]: I0214 15:47:00.129673 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.128859 4750 patch_prober.go:28] interesting pod/machine-config-daemon-j5rld container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.129558 4750 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.129629 4750 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j5rld"
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.130768 4750 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f8c2ae337c8b2107c86e9c9d956b672e0995b87603f44558e49e8e7af9477a5"} pod="openshift-machine-config-operator/machine-config-daemon-j5rld" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.130842 4750 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" podUID="581740c6-1f28-4471-8131-5d5042cc59f5" containerName="machine-config-daemon" containerID="cri-o://7f8c2ae337c8b2107c86e9c9d956b672e0995b87603f44558e49e8e7af9477a5" gracePeriod=600
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.415040 4750 generic.go:334] "Generic (PLEG): container finished" podID="581740c6-1f28-4471-8131-5d5042cc59f5" containerID="7f8c2ae337c8b2107c86e9c9d956b672e0995b87603f44558e49e8e7af9477a5" exitCode=0
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.415121 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerDied","Data":"7f8c2ae337c8b2107c86e9c9d956b672e0995b87603f44558e49e8e7af9477a5"}
Feb 14 15:47:30 crc kubenswrapper[4750]: I0214 15:47:30.415506 4750 scope.go:117] "RemoveContainer" containerID="ad9f9d6b7d0fbdb08cecd677a6893c66349b94bfa830661a570c97b4972d35b0"
Feb 14 15:47:31 crc kubenswrapper[4750]: I0214 15:47:31.428723 4750 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j5rld" event={"ID":"581740c6-1f28-4471-8131-5d5042cc59f5","Type":"ContainerStarted","Data":"9894b7356ca7148604f0783129aa91306cf485c60e69d2125d11fdc59a2992a1"}
Feb 14 15:48:31 crc kubenswrapper[4750]: I0214 15:48:31.336347 4750 scope.go:117] "RemoveContainer" containerID="8a0004d1af99e89e2c6548fb655f39dc4df8e2b667e5fa102089103648c458f9"
Feb 14 15:48:31 crc kubenswrapper[4750]: I0214 15:48:31.371179 4750 scope.go:117] "RemoveContainer" containerID="5f75bacefb8108f0c115919302830c3c012eb40fddb63598cca049847116d6f2"
Feb 14 15:48:31 crc kubenswrapper[4750]: I0214 15:48:31.397310 4750 scope.go:117] "RemoveContainer" containerID="7e48dd76bacf40f337ae9ff5dfb99067f931fc99417f598f887a0f466725101f"